00:00:00.001 Started by upstream project "autotest-per-patch" build number 127187 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:01.961 The recommended git tool is: git 00:00:01.961 using credential 00000000-0000-0000-0000-000000000002 00:00:01.963 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.975 Fetching changes from the remote Git repository 00:00:01.977 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.989 Using shallow fetch with depth 1 00:00:01.989 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.989 > git --version # timeout=10 00:00:02.000 > git --version # 'git version 2.39.2' 00:00:02.000 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.011 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.011 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.520 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.534 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.545 Checking out Revision 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b (FETCH_HEAD) 00:00:05.545 > git config core.sparsecheckout # timeout=10 00:00:05.557 > git read-tree -mu HEAD # timeout=10 00:00:05.575 > git checkout -f 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=5 00:00:05.595 Commit message: "jjb/jobs: add SPDK_TEST_SETUP flag into configuration" 00:00:05.595 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10 00:00:05.696 [Pipeline] Start of Pipeline 00:00:05.711 [Pipeline] library 00:00:05.713 Loading library shm_lib@master 00:00:05.713 Library shm_lib@master is cached. Copying from home. 00:00:05.732 [Pipeline] node 00:00:05.742 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.744 [Pipeline] { 00:00:05.753 [Pipeline] catchError 00:00:05.754 [Pipeline] { 00:00:05.764 [Pipeline] wrap 00:00:05.772 [Pipeline] { 00:00:05.778 [Pipeline] stage 00:00:05.779 [Pipeline] { (Prologue) 00:00:05.981 [Pipeline] sh 00:00:06.274 + logger -p user.info -t JENKINS-CI 00:00:06.293 [Pipeline] echo 00:00:06.295 Node: CYP9 00:00:06.303 [Pipeline] sh 00:00:06.605 [Pipeline] setCustomBuildProperty 00:00:06.613 [Pipeline] echo 00:00:06.614 Cleanup processes 00:00:06.618 [Pipeline] sh 00:00:06.902 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.902 1092076 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.915 [Pipeline] sh 00:00:07.201 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.201 ++ grep -v 'sudo pgrep' 00:00:07.201 ++ awk '{print $1}' 00:00:07.201 + sudo kill -9 00:00:07.201 + true 00:00:07.214 [Pipeline] cleanWs 00:00:07.223 [WS-CLEANUP] Deleting project workspace... 00:00:07.223 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.230 [WS-CLEANUP] done 00:00:07.233 [Pipeline] setCustomBuildProperty 00:00:07.243 [Pipeline] sh 00:00:07.527 + sudo git config --global --replace-all safe.directory '*' 00:00:07.588 [Pipeline] httpRequest 00:00:07.623 [Pipeline] echo 00:00:07.625 Sorcerer 10.211.164.101 is alive 00:00:07.633 [Pipeline] httpRequest 00:00:07.650 HttpMethod: GET 00:00:07.650 URL: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:07.672 Sending request to url: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:07.680 Response Code: HTTP/1.1 200 OK 00:00:07.681 Success: Status code 200 is in the accepted range: 200,404 00:00:07.682 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:29.242 [Pipeline] sh 00:00:29.529 + tar --no-same-owner -xf jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:29.543 [Pipeline] httpRequest 00:00:29.584 [Pipeline] echo 00:00:29.585 Sorcerer 10.211.164.101 is alive 00:00:29.592 [Pipeline] httpRequest 00:00:29.597 HttpMethod: GET 00:00:29.597 URL: http://10.211.164.101/packages/spdk_7b27bb4a496ddab55a40f582dcffd2c2583d90c7.tar.gz 00:00:29.598 Sending request to url: http://10.211.164.101/packages/spdk_7b27bb4a496ddab55a40f582dcffd2c2583d90c7.tar.gz 00:00:29.603 Response Code: HTTP/1.1 200 OK 00:00:29.604 Success: Status code 200 is in the accepted range: 200,404 00:00:29.604 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7b27bb4a496ddab55a40f582dcffd2c2583d90c7.tar.gz 00:02:10.430 [Pipeline] sh 00:02:10.722 + tar --no-same-owner -xf spdk_7b27bb4a496ddab55a40f582dcffd2c2583d90c7.tar.gz 00:02:13.305 [Pipeline] sh 00:02:13.590 + git -C spdk log --oneline -n5 00:02:13.590 7b27bb4a4 isa-l_crypto: update submodule to 2.25 00:02:13.590 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:02:13.590 fc2398dfa raid: clear base bdev configure_cb after executing 00:02:13.590 5558f3f50 raid: complete bdev_raid_create after sb is written 00:02:13.590 d005e023b raid: fix empty slot not updated in sb after resize 00:02:13.602 [Pipeline] } 00:02:13.619 [Pipeline] // stage 00:02:13.627 [Pipeline] stage 00:02:13.629 [Pipeline] { (Prepare) 00:02:13.644 [Pipeline] writeFile 00:02:13.660 [Pipeline] sh 00:02:13.946 + logger -p user.info -t JENKINS-CI 00:02:13.959 [Pipeline] sh 00:02:14.244 + logger -p user.info -t JENKINS-CI 00:02:14.259 [Pipeline] sh 00:02:14.544 + cat autorun-spdk.conf 00:02:14.544 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.544 SPDK_TEST_NVMF=1 00:02:14.544 SPDK_TEST_NVME_CLI=1 00:02:14.544 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.544 SPDK_TEST_NVMF_NICS=e810 00:02:14.544 SPDK_TEST_VFIOUSER=1 00:02:14.544 SPDK_RUN_UBSAN=1 00:02:14.544 NET_TYPE=phy 00:02:14.552 RUN_NIGHTLY=0 00:02:14.557 [Pipeline] readFile 00:02:14.584 [Pipeline] withEnv 00:02:14.586 [Pipeline] { 00:02:14.600 [Pipeline] sh 00:02:14.886 + set -ex 00:02:14.886 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:14.886 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:14.886 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.886 ++ SPDK_TEST_NVMF=1 00:02:14.886 ++ SPDK_TEST_NVME_CLI=1 00:02:14.886 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.886 ++ SPDK_TEST_NVMF_NICS=e810 00:02:14.886 ++ SPDK_TEST_VFIOUSER=1 00:02:14.886 ++ SPDK_RUN_UBSAN=1 00:02:14.886 ++ NET_TYPE=phy 00:02:14.886 ++ RUN_NIGHTLY=0 00:02:14.886 + case $SPDK_TEST_NVMF_NICS in 00:02:14.886 + DRIVERS=ice 00:02:14.886 + [[ tcp == \r\d\m\a ]] 00:02:14.886 + [[ -n ice ]] 00:02:14.886 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:14.886 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:14.886 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:14.886 rmmod: ERROR: Module irdma is not currently loaded 00:02:14.886 rmmod: ERROR: Module i40iw is not currently loaded 00:02:14.886 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:14.886 + true 00:02:14.886 + for D in $DRIVERS 00:02:14.886 + sudo modprobe ice 00:02:14.886 + exit 0 00:02:14.896 [Pipeline] } 00:02:14.916 [Pipeline] // withEnv 00:02:14.921 [Pipeline] } 00:02:14.933 [Pipeline] // stage 00:02:14.941 [Pipeline] catchError 00:02:14.942 [Pipeline] { 00:02:14.955 [Pipeline] timeout 00:02:14.955 Timeout set to expire in 50 min 00:02:14.957 [Pipeline] { 00:02:14.970 [Pipeline] stage 00:02:14.972 [Pipeline] { (Tests) 00:02:14.982 [Pipeline] sh 00:02:15.266 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:15.266 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:15.267 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:15.267 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:15.267 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:15.267 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:15.267 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:15.267 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:15.267 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:15.267 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:15.267 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:15.267 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:15.267 + source /etc/os-release 00:02:15.267 ++ NAME='Fedora Linux' 00:02:15.267 ++ VERSION='38 (Cloud Edition)' 00:02:15.267 ++ ID=fedora 00:02:15.267 ++ VERSION_ID=38 00:02:15.267 ++ VERSION_CODENAME= 00:02:15.267 ++ PLATFORM_ID=platform:f38 00:02:15.267 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:15.267 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:15.267 ++ LOGO=fedora-logo-icon 00:02:15.267 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:15.267 ++ HOME_URL=https://fedoraproject.org/ 00:02:15.267 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:15.267 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:15.267 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:15.267 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:15.267 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:15.267 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:15.267 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:15.267 ++ SUPPORT_END=2024-05-14 00:02:15.267 ++ VARIANT='Cloud Edition' 00:02:15.267 ++ VARIANT_ID=cloud 00:02:15.267 + uname -a 00:02:15.267 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:15.267 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:18.572 Hugepages 00:02:18.572 node hugesize free / total 00:02:18.572 node0 1048576kB 0 / 0 00:02:18.572 node0 2048kB 0 / 0 00:02:18.572 node1 1048576kB 0 / 0 00:02:18.572 node1 2048kB 0 / 0 00:02:18.572 00:02:18.572 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:18.572 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:18.572 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:18.572 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:18.572 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:18.572 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:18.572 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:18.572 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:18.572 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:18.572 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:18.572 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:18.572 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:18.572 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:18.572 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:18.572 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:18.572 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:18.572 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:18.572 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:18.572 + rm -f /tmp/spdk-ld-path 00:02:18.572 + source autorun-spdk.conf 00:02:18.572 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.572 ++ SPDK_TEST_NVMF=1 00:02:18.572 ++ SPDK_TEST_NVME_CLI=1 00:02:18.572 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.572 ++ SPDK_TEST_NVMF_NICS=e810 00:02:18.572 ++ SPDK_TEST_VFIOUSER=1 00:02:18.572 ++ SPDK_RUN_UBSAN=1 00:02:18.572 ++ NET_TYPE=phy 00:02:18.572 ++ RUN_NIGHTLY=0 00:02:18.572 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:18.572 + [[ -n '' ]] 00:02:18.572 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.572 + for M in /var/spdk/build-*-manifest.txt 00:02:18.572 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:02:18.572 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:18.572 + for M in /var/spdk/build-*-manifest.txt 00:02:18.572 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:18.572 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:18.572 ++ uname 00:02:18.572 + [[ Linux == \L\i\n\u\x ]] 00:02:18.572 + sudo dmesg -T 00:02:18.572 + sudo dmesg --clear 00:02:18.572 + dmesg_pid=1093069 00:02:18.572 + [[ Fedora Linux == FreeBSD ]] 00:02:18.572 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:18.572 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:18.572 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:18.572 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:18.572 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:18.572 + [[ -x /usr/src/fio-static/fio ]] 00:02:18.572 + export FIO_BIN=/usr/src/fio-static/fio 00:02:18.572 + FIO_BIN=/usr/src/fio-static/fio 00:02:18.572 + sudo dmesg -Tw 00:02:18.572 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:18.572 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:18.572 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:18.572 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:18.572 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:18.572 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:18.572 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:18.572 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:18.572 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:18.572 Test configuration: 00:02:18.572 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.572 SPDK_TEST_NVMF=1 00:02:18.572 SPDK_TEST_NVME_CLI=1 00:02:18.572 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.572 SPDK_TEST_NVMF_NICS=e810 00:02:18.572 SPDK_TEST_VFIOUSER=1 00:02:18.572 SPDK_RUN_UBSAN=1 00:02:18.572 NET_TYPE=phy 00:02:18.572 RUN_NIGHTLY=0 16:41:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:18.572 16:41:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:18.572 16:41:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:18.572 16:41:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:18.572 16:41:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.572 16:41:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.572 16:41:38 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.572 16:41:38 -- paths/export.sh@5 -- $ export PATH 00:02:18.572 16:41:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.572 16:41:38 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:18.572 16:41:38 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:18.572 16:41:38 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721918498.XXXXXX 00:02:18.572 16:41:38 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721918498.GqLDHx 00:02:18.572 16:41:38 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:18.572 16:41:38 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:18.572 16:41:38 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:18.572 16:41:38 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:18.572 16:41:38 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:18.572 16:41:38 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:18.572 16:41:38 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:18.572 16:41:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.572 16:41:38 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:18.572 16:41:38 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:18.572 16:41:38 -- pm/common@17 -- $ local monitor 00:02:18.572 16:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.572 16:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.572 16:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.572 16:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.572 16:41:38 -- pm/common@21 -- $ date +%s 00:02:18.572 16:41:38 -- pm/common@25 -- $ sleep 1 00:02:18.572 16:41:38 -- pm/common@21 -- $ date +%s 00:02:18.572 16:41:38 -- pm/common@21 -- $ date +%s 00:02:18.572 16:41:38 -- pm/common@21 -- $ date +%s 00:02:18.572 16:41:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721918498 00:02:18.572 16:41:38 -- 
pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721918498 00:02:18.573 16:41:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721918498 00:02:18.573 16:41:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721918498 00:02:18.573 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721918498_collect-vmstat.pm.log 00:02:18.573 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721918498_collect-cpu-load.pm.log 00:02:18.573 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721918498_collect-bmc-pm.bmc.pm.log 00:02:18.573 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721918498_collect-cpu-temp.pm.log 00:02:19.517 16:41:39 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:19.517 16:41:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:19.517 16:41:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:19.517 16:41:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.517 16:41:39 -- spdk/autobuild.sh@16 -- $ date -u 00:02:19.517 Thu Jul 25 02:41:39 PM UTC 2024 00:02:19.517 16:41:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:19.517 v24.09-pre-322-g7b27bb4a4 00:02:19.517 16:41:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:19.517 16:41:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:19.517 16:41:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:19.517 16:41:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:19.517 16:41:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:19.517 16:41:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.779 ************************************ 00:02:19.779 START TEST ubsan 00:02:19.779 ************************************ 00:02:19.779 16:41:39 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:19.779 using ubsan 00:02:19.779 00:02:19.779 real 0m0.001s 00:02:19.779 user 0m0.000s 00:02:19.779 sys 0m0.000s 00:02:19.779 16:41:39 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:19.779 16:41:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:19.779 ************************************ 00:02:19.779 END TEST ubsan 00:02:19.779 ************************************ 00:02:19.779 16:41:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:19.779 16:41:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:19.779 16:41:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:19.779 16:41:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:19.779 16:41:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:19.779 16:41:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:19.779 16:41:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:19.779 16:41:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:19.779 16:41:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror 
--with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:19.779 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:19.779 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:20.353 Using 'verbs' RDMA provider 00:02:36.237 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:48.479 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:48.479 Creating mk/config.mk...done. 00:02:48.479 Creating mk/cc.flags.mk...done. 00:02:48.479 Type 'make' to build. 00:02:48.479 16:42:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:48.479 16:42:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:48.479 16:42:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:48.479 16:42:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.479 ************************************ 00:02:48.479 START TEST make 00:02:48.479 ************************************ 00:02:48.479 16:42:08 make -- common/autotest_common.sh@1125 -- $ make -j144 00:02:49.052 make[1]: Nothing to be done for 'all'. 00:02:50.479 The Meson build system 00:02:50.480 Version: 1.3.1 00:02:50.480 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:50.480 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:50.480 Build type: native build 00:02:50.480 Project name: libvfio-user 00:02:50.480 Project version: 0.0.1 00:02:50.480 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:50.480 C linker for the host machine: cc ld.bfd 2.39-16 00:02:50.480 Host machine cpu family: x86_64 00:02:50.480 Host machine cpu: x86_64 00:02:50.480 Run-time dependency threads found: YES 00:02:50.480 Library dl found: YES 00:02:50.480 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:50.480 Run-time dependency json-c found: YES 0.17 00:02:50.480 Run-time dependency cmocka found: YES 1.1.7 00:02:50.480 Program pytest-3 found: NO 00:02:50.480 Program flake8 found: NO 00:02:50.480 Program misspell-fixer found: NO 00:02:50.480 Program restructuredtext-lint found: NO 00:02:50.480 Program valgrind found: YES (/usr/bin/valgrind) 00:02:50.480 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.480 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.480 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.480 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:50.480 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:50.480 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:50.480 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:50.480 Build targets in project: 8 00:02:50.480 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:50.480 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:50.480 00:02:50.480 libvfio-user 0.0.1 00:02:50.480 00:02:50.480 User defined options 00:02:50.480 buildtype : debug 00:02:50.480 default_library: shared 00:02:50.480 libdir : /usr/local/lib 00:02:50.480 00:02:50.480 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.480 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:50.739 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:50.739 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:50.739 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:50.739 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:50.739 [5/37] Compiling C object samples/null.p/null.c.o 00:02:50.739 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:50.739 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:50.739 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:50.739 [9/37] Compiling C object samples/server.p/server.c.o 00:02:50.739 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:50.739 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:50.739 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:50.739 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:50.739 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:50.739 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:50.739 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:50.739 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:50.739 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:50.739 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:50.739 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:50.739 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:50.739 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:50.739 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:50.739 [24/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:50.739 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:50.739 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:50.739 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:50.739 [28/37] Compiling C object samples/client.p/client.c.o 00:02:50.739 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:50.739 [30/37] Linking target test/unit_tests 00:02:50.739 [31/37] Linking target samples/client 00:02:51.000 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:51.000 [33/37] Linking target samples/gpio-pci-idio-16 00:02:51.000 [34/37] Linking target samples/null 00:02:51.000 [35/37] Linking target samples/server 00:02:51.000 [36/37] Linking target samples/lspci 00:02:51.000 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:51.000 INFO: autodetecting backend as ninja 00:02:51.000 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:51.000 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:51.265 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:51.265 ninja: no work to do. 00:02:57.869 The Meson build system 00:02:57.869 Version: 1.3.1 00:02:57.869 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:57.869 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:57.869 Build type: native build 00:02:57.869 Program cat found: YES (/usr/bin/cat) 00:02:57.869 Project name: DPDK 00:02:57.869 Project version: 24.03.0 00:02:57.869 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:57.869 C linker for the host machine: cc ld.bfd 2.39-16 00:02:57.869 Host machine cpu family: x86_64 00:02:57.869 Host machine cpu: x86_64 00:02:57.869 Message: ## Building in Developer Mode ## 00:02:57.869 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:57.869 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:57.869 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:57.869 Program python3 found: YES (/usr/bin/python3) 00:02:57.869 Program cat found: YES (/usr/bin/cat) 00:02:57.869 Compiler for C supports arguments -march=native: YES 00:02:57.869 Checking for size of "void *" : 8 00:02:57.869 Checking for size of "void *" : 8 (cached) 00:02:57.869 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:57.869 Library m found: YES 00:02:57.869 Library numa found: YES 00:02:57.869 Has header "numaif.h" : YES 00:02:57.869 Library fdt found: NO 00:02:57.869 Library execinfo found: NO 00:02:57.869 Has header "execinfo.h" : YES 00:02:57.869 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:57.869 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:57.869 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:57.869 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:57.869 Run-time dependency openssl found: YES 3.0.9 00:02:57.869 Run-time dependency libpcap found: YES 1.10.4 00:02:57.869 Has header "pcap.h" with dependency libpcap: YES 00:02:57.869 Compiler for C supports arguments -Wcast-qual: YES 00:02:57.869 Compiler for C supports arguments -Wdeprecated: YES 00:02:57.869 Compiler for C supports arguments -Wformat: YES 00:02:57.869 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:57.869 Compiler for C supports arguments -Wformat-security: NO 00:02:57.869 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:57.869 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:57.869 Compiler for C supports arguments -Wnested-externs: YES 00:02:57.869 Compiler for C supports arguments -Wold-style-definition: YES 00:02:57.869 Compiler for C supports arguments -Wpointer-arith: YES 00:02:57.869 Compiler for C supports arguments -Wsign-compare: YES 00:02:57.869 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:57.869 Compiler for C supports arguments -Wundef: YES 00:02:57.869 Compiler for C supports arguments -Wwrite-strings: YES 00:02:57.869 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:57.869 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:57.869 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:57.869 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:57.869 Program objdump found: YES (/usr/bin/objdump) 00:02:57.869 Compiler for C supports arguments -mavx512f: YES 00:02:57.869 Checking if "AVX512 checking" compiles: YES 00:02:57.869 Fetching value of define "__SSE4_2__" : 1 00:02:57.869 Fetching value of define "__AES__" : 1 00:02:57.869 Fetching value of define "__AVX__" : 1 00:02:57.869 Fetching value of define "__AVX2__" : 1 00:02:57.869 Fetching value of define "__AVX512BW__" : 1 00:02:57.869 Fetching value of define "__AVX512CD__" : 1 00:02:57.869 Fetching value of define "__AVX512DQ__" : 1 00:02:57.869 Fetching value of define "__AVX512F__" : 1 00:02:57.869 Fetching value of define "__AVX512VL__" : 1 00:02:57.869 Fetching value of define "__PCLMUL__" : 1 00:02:57.869 Fetching value of define "__RDRND__" : 1 00:02:57.869 Fetching value of define "__RDSEED__" : 1 00:02:57.869 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:57.869 Fetching value of define "__znver1__" : (undefined) 00:02:57.869 Fetching value of define "__znver2__" : (undefined) 00:02:57.869 Fetching value of define "__znver3__" : (undefined) 00:02:57.869 Fetching value of define "__znver4__" : (undefined) 00:02:57.869 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:57.869 Message: lib/log: Defining dependency "log" 00:02:57.869 Message: lib/kvargs: Defining dependency "kvargs" 00:02:57.869 Message: lib/telemetry: Defining dependency "telemetry" 00:02:57.869 Checking for function "getentropy" : NO 00:02:57.869 Message: lib/eal: Defining dependency "eal" 00:02:57.869 Message: lib/ring: Defining dependency "ring" 00:02:57.869 Message: lib/rcu: Defining dependency "rcu" 00:02:57.869 Message: lib/mempool: Defining dependency "mempool" 00:02:57.869 Message: lib/mbuf: Defining dependency "mbuf" 00:02:57.869 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:57.869 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:57.869 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:57.869 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:57.869 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:57.869 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:57.869 Compiler for C supports arguments -mpclmul: YES 00:02:57.869 Compiler for C supports arguments -maes: YES 00:02:57.869 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:57.869 Compiler for C supports arguments -mavx512bw: YES 00:02:57.869 Compiler for C supports arguments -mavx512dq: YES 00:02:57.869 Compiler for C supports arguments -mavx512vl: YES 00:02:57.870 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:57.870 Compiler for C supports arguments -mavx2: YES 00:02:57.870 Compiler for C supports arguments -mavx: YES 00:02:57.870 Message: lib/net: Defining dependency "net" 00:02:57.870 Message: lib/meter: Defining dependency "meter" 00:02:57.870 Message: lib/ethdev: Defining dependency "ethdev" 00:02:57.870 Message: lib/pci: Defining dependency "pci" 00:02:57.870 Message: lib/cmdline: Defining dependency "cmdline" 00:02:57.870 Message: lib/hash: Defining dependency "hash" 00:02:57.870 Message: lib/timer: Defining dependency "timer" 00:02:57.870 Message: lib/compressdev: Defining dependency "compressdev" 00:02:57.870 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:57.870 Message: lib/dmadev: Defining dependency "dmadev" 00:02:57.870 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:57.870 Message: lib/power: Defining dependency "power" 00:02:57.870 Message: lib/reorder: Defining dependency "reorder" 00:02:57.870 Message: lib/security: Defining dependency "security" 00:02:57.870 Has header "linux/userfaultfd.h" : YES 00:02:57.870 Has header "linux/vduse.h" : YES 00:02:57.870 Message: lib/vhost: Defining dependency "vhost" 00:02:57.870 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:57.870 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:57.870 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:57.870 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:57.870 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:57.870 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:57.870 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:57.870 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:57.870 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:57.870 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:57.870 Program doxygen found: YES (/usr/bin/doxygen) 00:02:57.870 Configuring doxy-api-html.conf using configuration 00:02:57.870 Configuring doxy-api-man.conf using configuration 00:02:57.870 Program mandb found: YES (/usr/bin/mandb) 00:02:57.870 Program sphinx-build found: NO 00:02:57.870 Configuring rte_build_config.h using configuration 00:02:57.870 Message: 00:02:57.870 ================= 00:02:57.870 Applications Enabled 00:02:57.870 ================= 00:02:57.870 00:02:57.870 apps: 00:02:57.870 00:02:57.870 00:02:57.870 Message: 00:02:57.870 ================= 00:02:57.870 Libraries Enabled 00:02:57.870 ================= 00:02:57.870 00:02:57.870 libs: 00:02:57.870 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:57.870 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:57.870 cryptodev, dmadev, power, reorder, security, vhost, 00:02:57.870 00:02:57.870 Message: 00:02:57.870 =============== 00:02:57.870 Drivers Enabled 00:02:57.870 =============== 00:02:57.870 00:02:57.870 common: 00:02:57.870 00:02:57.870 bus: 00:02:57.870 pci, vdev, 00:02:57.870 mempool: 00:02:57.870 ring, 00:02:57.870 dma: 00:02:57.870 00:02:57.870 net: 00:02:57.870 00:02:57.870 crypto: 00:02:57.870 00:02:57.870 compress: 00:02:57.870 00:02:57.870 vdpa: 00:02:57.870 00:02:57.870 00:02:57.870 Message: 00:02:57.870 ================= 00:02:57.870 Content Skipped 00:02:57.870 ================= 00:02:57.870 00:02:57.870 apps: 00:02:57.870 dumpcap: explicitly disabled via build config 00:02:57.870 graph: explicitly disabled via build config 00:02:57.870 pdump: explicitly disabled via build config 00:02:57.870 proc-info: explicitly disabled via build config 00:02:57.870 test-acl: explicitly disabled via build config 00:02:57.870 test-bbdev: explicitly disabled via build config 00:02:57.870 test-cmdline: explicitly disabled via build config 00:02:57.870 test-compress-perf: explicitly disabled via build config 00:02:57.870 test-crypto-perf: explicitly disabled via build config 00:02:57.870 test-dma-perf: explicitly disabled via build config 00:02:57.870 test-eventdev: explicitly disabled via build config 00:02:57.870 test-fib: explicitly disabled via build config 00:02:57.870 test-flow-perf: explicitly disabled via build config 00:02:57.870 test-gpudev: explicitly disabled via build config 00:02:57.870 
test-mldev: explicitly disabled via build config 00:02:57.870 test-pipeline: explicitly disabled via build config 00:02:57.870 test-pmd: explicitly disabled via build config 00:02:57.870 test-regex: explicitly disabled via build config 00:02:57.870 test-sad: explicitly disabled via build config 00:02:57.870 test-security-perf: explicitly disabled via build config 00:02:57.870 00:02:57.870 libs: 00:02:57.870 argparse: explicitly disabled via build config 00:02:57.870 metrics: explicitly disabled via build config 00:02:57.870 acl: explicitly disabled via build config 00:02:57.870 bbdev: explicitly disabled via build config 00:02:57.870 bitratestats: explicitly disabled via build config 00:02:57.870 bpf: explicitly disabled via build config 00:02:57.870 cfgfile: explicitly disabled via build config 00:02:57.870 distributor: explicitly disabled via build config 00:02:57.870 efd: explicitly disabled via build config 00:02:57.870 eventdev: explicitly disabled via build config 00:02:57.870 dispatcher: explicitly disabled via build config 00:02:57.870 gpudev: explicitly disabled via build config 00:02:57.870 gro: explicitly disabled via build config 00:02:57.870 gso: explicitly disabled via build config 00:02:57.870 ip_frag: explicitly disabled via build config 00:02:57.870 jobstats: explicitly disabled via build config 00:02:57.870 latencystats: explicitly disabled via build config 00:02:57.870 lpm: explicitly disabled via build config 00:02:57.870 member: explicitly disabled via build config 00:02:57.870 pcapng: explicitly disabled via build config 00:02:57.870 rawdev: explicitly disabled via build config 00:02:57.870 regexdev: explicitly disabled via build config 00:02:57.870 mldev: explicitly disabled via build config 00:02:57.870 rib: explicitly disabled via build config 00:02:57.870 sched: explicitly disabled via build config 00:02:57.870 stack: explicitly disabled via build config 00:02:57.870 ipsec: explicitly disabled via build config 00:02:57.870 pdcp: explicitly disabled via build config 00:02:57.870 fib: explicitly disabled via build config 00:02:57.870 port: explicitly disabled via build config 00:02:57.870 pdump: explicitly disabled via build config 00:02:57.870 table: explicitly disabled via build config 00:02:57.870 pipeline: explicitly disabled via build config 00:02:57.870 graph: explicitly disabled via build config 00:02:57.870 node: explicitly disabled via build config 00:02:57.870 00:02:57.870 drivers: 00:02:57.870 common/cpt: not in enabled drivers build config 00:02:57.870 common/dpaax: not in enabled drivers build config 00:02:57.870 common/iavf: not in enabled drivers build config 00:02:57.870 common/idpf: not in enabled drivers build config 00:02:57.870 common/ionic: not in enabled drivers build config 00:02:57.870 common/mvep: not in enabled drivers build config 00:02:57.870 common/octeontx: not in enabled drivers build config 00:02:57.870 bus/auxiliary: not in enabled drivers build config 00:02:57.870 bus/cdx: not in enabled drivers build config 00:02:57.870 bus/dpaa: not in enabled drivers build config 00:02:57.870 bus/fslmc: not in enabled drivers build config 00:02:57.870 bus/ifpga: not in enabled drivers build config 00:02:57.870 bus/platform: not in enabled drivers build config 00:02:57.870 bus/uacce: not in enabled drivers build config 00:02:57.870 bus/vmbus: not in enabled drivers build config 00:02:57.870 common/cnxk: not in enabled drivers build config 00:02:57.870 common/mlx5: not in enabled drivers build config 00:02:57.870 common/nfp: not in enabled drivers 
build config 00:02:57.870 common/nitrox: not in enabled drivers build config 00:02:57.870 common/qat: not in enabled drivers build config 00:02:57.870 common/sfc_efx: not in enabled drivers build config 00:02:57.870 mempool/bucket: not in enabled drivers build config 00:02:57.870 mempool/cnxk: not in enabled drivers build config 00:02:57.870 mempool/dpaa: not in enabled drivers build config 00:02:57.870 mempool/dpaa2: not in enabled drivers build config 00:02:57.870 mempool/octeontx: not in enabled drivers build config 00:02:57.870 mempool/stack: not in enabled drivers build config 00:02:57.870 dma/cnxk: not in enabled drivers build config 00:02:57.870 dma/dpaa: not in enabled drivers build config 00:02:57.870 dma/dpaa2: not in enabled drivers build config 00:02:57.870 dma/hisilicon: not in enabled drivers build config 00:02:57.870 dma/idxd: not in enabled drivers build config 00:02:57.870 dma/ioat: not in enabled drivers build config 00:02:57.870 dma/skeleton: not in enabled drivers build config 00:02:57.870 net/af_packet: not in enabled drivers build config 00:02:57.870 net/af_xdp: not in enabled drivers build config 00:02:57.870 net/ark: not in enabled drivers build config 00:02:57.870 net/atlantic: not in enabled drivers build config 00:02:57.870 net/avp: not in enabled drivers build config 00:02:57.870 net/axgbe: not in enabled drivers build config 00:02:57.870 net/bnx2x: not in enabled drivers build config 00:02:57.870 net/bnxt: not in enabled drivers build config 00:02:57.870 net/bonding: not in enabled drivers build config 00:02:57.870 net/cnxk: not in enabled drivers build config 00:02:57.870 net/cpfl: not in enabled drivers build config 00:02:57.870 net/cxgbe: not in enabled drivers build config 00:02:57.870 net/dpaa: not in enabled drivers build config 00:02:57.870 net/dpaa2: not in enabled drivers build config 00:02:57.870 net/e1000: not in enabled drivers build config 00:02:57.870 net/ena: not in enabled drivers build config 00:02:57.870 net/enetc: not in enabled drivers build config 00:02:57.870 net/enetfec: not in enabled drivers build config 00:02:57.870 net/enic: not in enabled drivers build config 00:02:57.870 net/failsafe: not in enabled drivers build config 00:02:57.870 net/fm10k: not in enabled drivers build config 00:02:57.871 net/gve: not in enabled drivers build config 00:02:57.871 net/hinic: not in enabled drivers build config 00:02:57.871 net/hns3: not in enabled drivers build config 00:02:57.871 net/i40e: not in enabled drivers build config 00:02:57.871 net/iavf: not in enabled drivers build config 00:02:57.871 net/ice: not in enabled drivers build config 00:02:57.871 net/idpf: not in enabled drivers build config 00:02:57.871 net/igc: not in enabled drivers build config 00:02:57.871 net/ionic: not in enabled drivers build config 00:02:57.871 net/ipn3ke: not in enabled drivers build config 00:02:57.871 net/ixgbe: not in enabled drivers build config 00:02:57.871 net/mana: not in enabled drivers build config 00:02:57.871 net/memif: not in enabled drivers build config 00:02:57.871 net/mlx4: not in enabled drivers build config 00:02:57.871 net/mlx5: not in enabled drivers build config 00:02:57.871 net/mvneta: not in enabled drivers build config 00:02:57.871 net/mvpp2: not in enabled drivers build config 00:02:57.871 net/netvsc: not in enabled drivers build config 00:02:57.871 net/nfb: not in enabled drivers build config 00:02:57.871 net/nfp: not in enabled drivers build config 00:02:57.871 net/ngbe: not in enabled drivers build config 00:02:57.871 net/null: not in 
enabled drivers build config 00:02:57.871 net/octeontx: not in enabled drivers build config 00:02:57.871 net/octeon_ep: not in enabled drivers build config 00:02:57.871 net/pcap: not in enabled drivers build config 00:02:57.871 net/pfe: not in enabled drivers build config 00:02:57.871 net/qede: not in enabled drivers build config 00:02:57.871 net/ring: not in enabled drivers build config 00:02:57.871 net/sfc: not in enabled drivers build config 00:02:57.871 net/softnic: not in enabled drivers build config 00:02:57.871 net/tap: not in enabled drivers build config 00:02:57.871 net/thunderx: not in enabled drivers build config 00:02:57.871 net/txgbe: not in enabled drivers build config 00:02:57.871 net/vdev_netvsc: not in enabled drivers build config 00:02:57.871 net/vhost: not in enabled drivers build config 00:02:57.871 net/virtio: not in enabled drivers build config 00:02:57.871 net/vmxnet3: not in enabled drivers build config 00:02:57.871 raw/*: missing internal dependency, "rawdev" 00:02:57.871 crypto/armv8: not in enabled drivers build config 00:02:57.871 crypto/bcmfs: not in enabled drivers build config 00:02:57.871 crypto/caam_jr: not in enabled drivers build config 00:02:57.871 crypto/ccp: not in enabled drivers build config 00:02:57.871 crypto/cnxk: not in enabled drivers build config 00:02:57.871 crypto/dpaa_sec: not in enabled drivers build config 00:02:57.871 crypto/dpaa2_sec: not in enabled drivers build config 00:02:57.871 crypto/ipsec_mb: not in enabled drivers build config 00:02:57.871 crypto/mlx5: not in enabled drivers build config 00:02:57.871 crypto/mvsam: not in enabled drivers build config 00:02:57.871 crypto/nitrox: not in enabled drivers build config 00:02:57.871 crypto/null: not in enabled drivers build config 00:02:57.871 crypto/octeontx: not in enabled drivers build config 00:02:57.871 crypto/openssl: not in enabled drivers build config 00:02:57.871 crypto/scheduler: not in enabled drivers build config 00:02:57.871 crypto/uadk: not in enabled drivers build config 00:02:57.871 crypto/virtio: not in enabled drivers build config 00:02:57.871 compress/isal: not in enabled drivers build config 00:02:57.871 compress/mlx5: not in enabled drivers build config 00:02:57.871 compress/nitrox: not in enabled drivers build config 00:02:57.871 compress/octeontx: not in enabled drivers build config 00:02:57.871 compress/zlib: not in enabled drivers build config 00:02:57.871 regex/*: missing internal dependency, "regexdev" 00:02:57.871 ml/*: missing internal dependency, "mldev" 00:02:57.871 vdpa/ifc: not in enabled drivers build config 00:02:57.871 vdpa/mlx5: not in enabled drivers build config 00:02:57.871 vdpa/nfp: not in enabled drivers build config 00:02:57.871 vdpa/sfc: not in enabled drivers build config 00:02:57.871 event/*: missing internal dependency, "eventdev" 00:02:57.871 baseband/*: missing internal dependency, "bbdev" 00:02:57.871 gpu/*: missing internal dependency, "gpudev" 00:02:57.871 00:02:57.871 00:02:57.871 Build targets in project: 84 00:02:57.871 00:02:57.871 DPDK 24.03.0 00:02:57.871 00:02:57.871 User defined options 00:02:57.871 buildtype : debug 00:02:57.871 default_library : shared 00:02:57.871 libdir : lib 00:02:57.871 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:57.871 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:57.871 c_link_args : 00:02:57.871 cpu_instruction_set: native 00:02:57.871 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:57.871 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:57.871 enable_docs : false 00:02:57.871 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:57.871 enable_kmods : false 00:02:57.871 max_lcores : 128 00:02:57.871 tests : false 00:02:57.871 00:02:57.871 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.871 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:57.871 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:57.871 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:57.871 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.871 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:57.871 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:57.871 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.871 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:57.871 [8/267] Linking static target lib/librte_kvargs.a 00:02:57.871 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:57.871 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.871 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:57.871 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:57.871 [13/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:58.193 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:58.193 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:58.193 [16/267] Linking static target lib/librte_log.a 00:02:58.193 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:58.193 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:58.193 [19/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:58.193 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:58.193 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:58.193 [22/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:58.193 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:58.193 [24/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:58.193 [25/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.193 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:58.193 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:58.193 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:58.193 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.193 [30/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.193 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:58.193 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.193 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:58.193 [34/267] Linking static target lib/librte_pci.a 00:02:58.193 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:58.193 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:58.193 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.193 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:58.469 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:58.469 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:58.469 [41/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.469 [42/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:58.469 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.469 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:58.469 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:58.469 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:58.469 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:58.469 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:58.469 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:58.469 [50/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:58.469 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:58.469 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:58.469 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:58.469 [54/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.469 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:58.469 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:58.469 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:58.469 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:58.469 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:58.469 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:58.469 [61/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:58.469 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:58.469 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:58.469 [64/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.469 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.469 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:58.469 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:58.469 [68/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:58.469 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.469 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 
00:02:58.469 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:58.469 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:58.469 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:58.469 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:58.469 [75/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:58.469 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:58.469 [77/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:58.469 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:58.469 [79/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:58.469 [80/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.469 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:58.469 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:58.469 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:58.469 [84/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.469 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.469 [86/267] Linking static target lib/librte_ring.a 00:02:58.469 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:58.469 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:58.469 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:58.469 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:58.469 [91/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:58.469 [92/267] Linking static target lib/librte_meter.a 00:02:58.469 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:58.469 [94/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:58.469 [95/267] Linking static target lib/librte_telemetry.a 00:02:58.469 [96/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:58.469 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:58.469 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:58.469 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:58.469 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:58.469 [101/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.469 [102/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:58.469 [103/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.469 [104/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.469 [105/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:58.469 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:58.469 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:58.469 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.469 [109/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:58.469 [110/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:58.469 [111/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:58.469 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:58.469 [113/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.469 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:58.469 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.469 [116/267] Linking static target lib/librte_cmdline.a 00:02:58.469 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:58.469 [118/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:58.469 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:58.730 [120/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.731 [121/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.731 [122/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.731 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.731 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:58.731 [125/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.731 [126/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:58.731 [127/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:58.731 [128/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.731 [129/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:58.731 [130/267] Linking static target lib/librte_timer.a 00:02:58.731 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:58.731 [132/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:58.731 [133/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:58.731 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:58.731 [135/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.731 [136/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:58.731 [137/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:58.731 [138/267] Linking static target lib/librte_rcu.a 00:02:58.731 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:58.731 [140/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:58.731 [141/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:58.731 [142/267] Linking target lib/librte_log.so.24.1 00:02:58.731 [143/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:58.731 [144/267] Linking static target lib/librte_power.a 00:02:58.731 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:58.731 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:58.731 [147/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.731 [148/267] Linking static target lib/librte_mempool.a 00:02:58.731 [149/267] Linking static target lib/librte_net.a 00:02:58.731 [150/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:58.731 [151/267] Linking static target lib/librte_dmadev.a 00:02:58.731 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 
00:02:58.731 [153/267] Linking static target lib/librte_security.a 00:02:58.731 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:58.731 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:58.731 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.731 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:58.731 [158/267] Linking static target lib/librte_compressdev.a 00:02:58.731 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.731 [160/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.731 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.731 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.731 [163/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.731 [164/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.731 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:58.731 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.731 [167/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:58.731 [168/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.731 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:58.731 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.731 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:58.731 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:58.731 [173/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:58.731 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:58.731 [175/267] Linking static target lib/librte_eal.a 00:02:58.731 [176/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:58.731 [177/267] Linking static target lib/librte_reorder.a 00:02:58.731 [178/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.731 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.731 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.731 [181/267] Linking target lib/librte_kvargs.so.24.1 00:02:58.731 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.731 [183/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.731 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.993 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.993 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:58.993 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.993 [188/267] Linking static target lib/librte_mbuf.a 00:02:58.993 [189/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.993 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:58.993 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.993 [192/267] Linking static target lib/librte_hash.a 00:02:58.993 [193/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.993 [194/267] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.993 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.993 [196/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.993 [197/267] Linking static target drivers/librte_bus_vdev.a 00:02:58.993 [198/267] Linking static target drivers/librte_bus_pci.a 00:02:58.993 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:58.993 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.993 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.993 [202/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:58.993 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.993 [204/267] Linking static target lib/librte_cryptodev.a 00:02:58.993 [205/267] Linking static target drivers/librte_mempool_ring.a 00:02:58.993 [206/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.993 [207/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.993 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.993 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.993 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.255 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:59.255 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.255 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:59.255 [214/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.255 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.517 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.517 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.517 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.517 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.517 [220/267] Linking static target lib/librte_ethdev.a 00:02:59.517 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.517 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.778 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.778 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.778 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.778 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.351 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.351 [228/267] Linking static target lib/librte_vhost.a 00:03:01.296 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:02.241 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.836 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.780 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.780 [233/267] Linking target lib/librte_eal.so.24.1 00:03:10.041 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:10.041 [235/267] Linking target lib/librte_timer.so.24.1 00:03:10.041 [236/267] Linking target lib/librte_ring.so.24.1 00:03:10.041 [237/267] Linking target lib/librte_pci.so.24.1 00:03:10.041 [238/267] Linking target lib/librte_meter.so.24.1 00:03:10.041 [239/267] Linking target lib/librte_dmadev.so.24.1 00:03:10.041 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:10.041 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:10.041 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:10.041 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:10.041 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:10.041 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:10.302 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:10.302 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:10.302 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:10.302 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:10.302 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:10.302 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:10.302 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:10.563 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:10.563 [254/267] Linking target lib/librte_reorder.so.24.1 00:03:10.563 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:10.563 [256/267] Linking target lib/librte_net.so.24.1 00:03:10.563 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:10.563 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:10.563 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:10.823 [260/267] Linking target lib/librte_cmdline.so.24.1 00:03:10.823 [261/267] Linking target lib/librte_hash.so.24.1 00:03:10.823 [262/267] Linking target lib/librte_security.so.24.1 00:03:10.823 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:10.823 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:10.823 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:10.823 [266/267] Linking target lib/librte_power.so.24.1 00:03:11.083 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:11.083 INFO: autodetecting backend as ninja 00:03:11.083 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:16.374 CC lib/ut_mock/mock.o 00:03:16.374 CC lib/log/log.o 00:03:16.374 CC lib/log/log_flags.o 00:03:16.374 CC lib/ut/ut.o 00:03:16.374 CC lib/log/log_deprecated.o 00:03:16.374 LIB libspdk_log.a 00:03:16.374 LIB libspdk_ut.a 00:03:16.374 LIB libspdk_ut_mock.a 
00:03:16.374 SO libspdk_ut.so.2.0 00:03:16.374 SO libspdk_log.so.7.0 00:03:16.374 SO libspdk_ut_mock.so.6.0 00:03:16.374 SYMLINK libspdk_ut.so 00:03:16.374 SYMLINK libspdk_ut_mock.so 00:03:16.374 SYMLINK libspdk_log.so 00:03:16.374 CC lib/util/base64.o 00:03:16.374 CC lib/util/bit_array.o 00:03:16.374 CC lib/util/crc16.o 00:03:16.374 CC lib/util/cpuset.o 00:03:16.374 CC lib/util/crc32.o 00:03:16.374 CC lib/util/crc32c.o 00:03:16.374 CC lib/util/crc32_ieee.o 00:03:16.374 CXX lib/trace_parser/trace.o 00:03:16.374 CC lib/util/crc64.o 00:03:16.374 CC lib/util/fd.o 00:03:16.374 CC lib/util/dif.o 00:03:16.374 CC lib/util/fd_group.o 00:03:16.374 CC lib/util/file.o 00:03:16.374 CC lib/util/hexlify.o 00:03:16.374 CC lib/util/math.o 00:03:16.374 CC lib/util/iov.o 00:03:16.374 CC lib/util/net.o 00:03:16.374 CC lib/util/pipe.o 00:03:16.374 CC lib/util/strerror_tls.o 00:03:16.374 CC lib/util/string.o 00:03:16.374 CC lib/util/uuid.o 00:03:16.374 CC lib/util/xor.o 00:03:16.374 CC lib/util/zipf.o 00:03:16.374 CC lib/dma/dma.o 00:03:16.374 CC lib/ioat/ioat.o 00:03:16.374 LIB libspdk_dma.a 00:03:16.374 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.374 CC lib/vfio_user/host/vfio_user.o 00:03:16.374 SO libspdk_dma.so.4.0 00:03:16.635 SYMLINK libspdk_dma.so 00:03:16.635 LIB libspdk_ioat.a 00:03:16.635 SO libspdk_ioat.so.7.0 00:03:16.635 SYMLINK libspdk_ioat.so 00:03:16.635 LIB libspdk_vfio_user.a 00:03:16.897 SO libspdk_vfio_user.so.5.0 00:03:16.897 LIB libspdk_util.a 00:03:16.897 SO libspdk_util.so.10.0 00:03:16.897 SYMLINK libspdk_vfio_user.so 00:03:16.897 SYMLINK libspdk_util.so 00:03:17.158 LIB libspdk_trace_parser.a 00:03:17.159 SO libspdk_trace_parser.so.5.0 00:03:17.159 SYMLINK libspdk_trace_parser.so 00:03:17.419 CC lib/rdma_provider/common.o 00:03:17.419 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:17.419 CC lib/env_dpdk/env.o 00:03:17.419 CC lib/json/json_parse.o 00:03:17.419 CC lib/env_dpdk/memory.o 00:03:17.419 CC lib/env_dpdk/pci.o 00:03:17.419 CC lib/env_dpdk/init.o 00:03:17.419 CC lib/json/json_util.o 00:03:17.419 CC lib/json/json_write.o 00:03:17.419 CC lib/env_dpdk/threads.o 00:03:17.419 CC lib/conf/conf.o 00:03:17.419 CC lib/env_dpdk/pci_ioat.o 00:03:17.419 CC lib/env_dpdk/pci_virtio.o 00:03:17.419 CC lib/env_dpdk/pci_vmd.o 00:03:17.419 CC lib/env_dpdk/pci_idxd.o 00:03:17.419 CC lib/idxd/idxd.o 00:03:17.419 CC lib/env_dpdk/pci_event.o 00:03:17.419 CC lib/env_dpdk/sigbus_handler.o 00:03:17.419 CC lib/idxd/idxd_user.o 00:03:17.419 CC lib/env_dpdk/pci_dpdk.o 00:03:17.419 CC lib/vmd/vmd.o 00:03:17.419 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.419 CC lib/idxd/idxd_kernel.o 00:03:17.419 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.419 CC lib/vmd/led.o 00:03:17.419 CC lib/rdma_utils/rdma_utils.o 00:03:17.419 LIB libspdk_rdma_provider.a 00:03:17.681 SO libspdk_rdma_provider.so.6.0 00:03:17.681 LIB libspdk_conf.a 00:03:17.681 LIB libspdk_rdma_utils.a 00:03:17.681 SO libspdk_conf.so.6.0 00:03:17.681 SYMLINK libspdk_rdma_provider.so 00:03:17.681 LIB libspdk_json.a 00:03:17.681 SO libspdk_json.so.6.0 00:03:17.681 SO libspdk_rdma_utils.so.1.0 00:03:17.681 SYMLINK libspdk_conf.so 00:03:17.681 SYMLINK libspdk_rdma_utils.so 00:03:17.681 SYMLINK libspdk_json.so 00:03:17.944 LIB libspdk_idxd.a 00:03:17.944 SO libspdk_idxd.so.12.0 00:03:17.944 LIB libspdk_vmd.a 00:03:17.944 SYMLINK libspdk_idxd.so 00:03:17.944 SO libspdk_vmd.so.6.0 00:03:17.944 SYMLINK libspdk_vmd.so 00:03:18.206 CC lib/jsonrpc/jsonrpc_server.o 00:03:18.206 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:18.206 CC lib/jsonrpc/jsonrpc_client.o 
00:03:18.206 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:18.507 LIB libspdk_jsonrpc.a 00:03:18.507 SO libspdk_jsonrpc.so.6.0 00:03:18.507 SYMLINK libspdk_jsonrpc.so 00:03:18.507 LIB libspdk_env_dpdk.a 00:03:18.771 SO libspdk_env_dpdk.so.15.0 00:03:18.771 SYMLINK libspdk_env_dpdk.so 00:03:18.771 CC lib/rpc/rpc.o 00:03:19.032 LIB libspdk_rpc.a 00:03:19.032 SO libspdk_rpc.so.6.0 00:03:19.032 SYMLINK libspdk_rpc.so 00:03:19.606 CC lib/trace/trace.o 00:03:19.606 CC lib/trace/trace_flags.o 00:03:19.606 CC lib/trace/trace_rpc.o 00:03:19.606 CC lib/keyring/keyring.o 00:03:19.606 CC lib/notify/notify.o 00:03:19.606 CC lib/keyring/keyring_rpc.o 00:03:19.606 CC lib/notify/notify_rpc.o 00:03:19.606 LIB libspdk_notify.a 00:03:19.606 SO libspdk_notify.so.6.0 00:03:19.868 LIB libspdk_keyring.a 00:03:19.868 LIB libspdk_trace.a 00:03:19.868 SYMLINK libspdk_notify.so 00:03:19.868 SO libspdk_keyring.so.1.0 00:03:19.868 SO libspdk_trace.so.10.0 00:03:19.868 SYMLINK libspdk_keyring.so 00:03:19.868 SYMLINK libspdk_trace.so 00:03:20.129 CC lib/sock/sock.o 00:03:20.129 CC lib/sock/sock_rpc.o 00:03:20.129 CC lib/thread/thread.o 00:03:20.129 CC lib/thread/iobuf.o 00:03:20.705 LIB libspdk_sock.a 00:03:20.705 SO libspdk_sock.so.10.0 00:03:20.705 SYMLINK libspdk_sock.so 00:03:20.967 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:20.967 CC lib/nvme/nvme_ctrlr.o 00:03:20.967 CC lib/nvme/nvme_fabric.o 00:03:20.967 CC lib/nvme/nvme_ns_cmd.o 00:03:20.967 CC lib/nvme/nvme_ns.o 00:03:20.967 CC lib/nvme/nvme_pcie.o 00:03:20.967 CC lib/nvme/nvme_pcie_common.o 00:03:20.967 CC lib/nvme/nvme_qpair.o 00:03:20.967 CC lib/nvme/nvme.o 00:03:20.967 CC lib/nvme/nvme_quirks.o 00:03:20.967 CC lib/nvme/nvme_transport.o 00:03:20.967 CC lib/nvme/nvme_discovery.o 00:03:20.967 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.967 CC lib/nvme/nvme_tcp.o 00:03:20.967 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.967 CC lib/nvme/nvme_opal.o 00:03:20.967 CC lib/nvme/nvme_io_msg.o 00:03:20.967 CC lib/nvme/nvme_poll_group.o 00:03:20.967 CC lib/nvme/nvme_zns.o 00:03:20.967 CC lib/nvme/nvme_stubs.o 00:03:20.967 CC lib/nvme/nvme_auth.o 00:03:20.967 CC lib/nvme/nvme_cuse.o 00:03:20.967 CC lib/nvme/nvme_vfio_user.o 00:03:20.968 CC lib/nvme/nvme_rdma.o 00:03:21.540 LIB libspdk_thread.a 00:03:21.540 SO libspdk_thread.so.10.1 00:03:21.540 SYMLINK libspdk_thread.so 00:03:21.802 CC lib/virtio/virtio.o 00:03:21.802 CC lib/blob/blobstore.o 00:03:21.802 CC lib/blob/zeroes.o 00:03:21.802 CC lib/vfu_tgt/tgt_endpoint.o 00:03:21.802 CC lib/virtio/virtio_vhost_user.o 00:03:21.802 CC lib/blob/request.o 00:03:21.802 CC lib/vfu_tgt/tgt_rpc.o 00:03:21.802 CC lib/virtio/virtio_vfio_user.o 00:03:21.802 CC lib/blob/blob_bs_dev.o 00:03:21.802 CC lib/virtio/virtio_pci.o 00:03:21.802 CC lib/accel/accel.o 00:03:21.802 CC lib/accel/accel_rpc.o 00:03:21.802 CC lib/accel/accel_sw.o 00:03:21.802 CC lib/init/json_config.o 00:03:21.802 CC lib/init/subsystem.o 00:03:21.802 CC lib/init/subsystem_rpc.o 00:03:21.802 CC lib/init/rpc.o 00:03:22.376 LIB libspdk_init.a 00:03:22.376 SO libspdk_init.so.5.0 00:03:22.376 LIB libspdk_vfu_tgt.a 00:03:22.376 LIB libspdk_virtio.a 00:03:22.376 SO libspdk_vfu_tgt.so.3.0 00:03:22.376 SO libspdk_virtio.so.7.0 00:03:22.376 SYMLINK libspdk_init.so 00:03:22.376 SYMLINK libspdk_vfu_tgt.so 00:03:22.376 SYMLINK libspdk_virtio.so 00:03:22.637 CC lib/event/app.o 00:03:22.637 CC lib/event/reactor.o 00:03:22.637 CC lib/event/log_rpc.o 00:03:22.637 CC lib/event/app_rpc.o 00:03:22.637 CC lib/event/scheduler_static.o 00:03:22.899 LIB libspdk_accel.a 00:03:22.899 SO libspdk_accel.so.16.0 
00:03:22.899 LIB libspdk_nvme.a 00:03:22.899 SYMLINK libspdk_accel.so 00:03:22.899 SO libspdk_nvme.so.13.1 00:03:23.161 LIB libspdk_event.a 00:03:23.161 SO libspdk_event.so.14.0 00:03:23.161 SYMLINK libspdk_event.so 00:03:23.161 CC lib/bdev/bdev.o 00:03:23.161 CC lib/bdev/bdev_rpc.o 00:03:23.161 CC lib/bdev/bdev_zone.o 00:03:23.161 CC lib/bdev/part.o 00:03:23.161 CC lib/bdev/scsi_nvme.o 00:03:23.422 SYMLINK libspdk_nvme.so 00:03:23.994 LIB libspdk_blob.a 00:03:24.255 SO libspdk_blob.so.11.0 00:03:24.255 SYMLINK libspdk_blob.so 00:03:24.516 CC lib/lvol/lvol.o 00:03:24.516 CC lib/blobfs/blobfs.o 00:03:24.516 CC lib/blobfs/tree.o 00:03:25.462 LIB libspdk_blobfs.a 00:03:25.462 SO libspdk_blobfs.so.10.0 00:03:25.462 LIB libspdk_lvol.a 00:03:25.462 SO libspdk_lvol.so.10.0 00:03:25.462 LIB libspdk_bdev.a 00:03:25.462 SYMLINK libspdk_blobfs.so 00:03:25.462 SYMLINK libspdk_lvol.so 00:03:25.462 SO libspdk_bdev.so.16.0 00:03:25.462 SYMLINK libspdk_bdev.so 00:03:26.033 CC lib/nbd/nbd_rpc.o 00:03:26.033 CC lib/nbd/nbd.o 00:03:26.033 CC lib/ublk/ublk.o 00:03:26.033 CC lib/ublk/ublk_rpc.o 00:03:26.033 CC lib/scsi/dev.o 00:03:26.033 CC lib/scsi/lun.o 00:03:26.033 CC lib/scsi/port.o 00:03:26.033 CC lib/nvmf/ctrlr.o 00:03:26.033 CC lib/scsi/scsi_bdev.o 00:03:26.033 CC lib/scsi/scsi.o 00:03:26.033 CC lib/nvmf/ctrlr_discovery.o 00:03:26.033 CC lib/nvmf/ctrlr_bdev.o 00:03:26.033 CC lib/scsi/scsi_pr.o 00:03:26.033 CC lib/nvmf/subsystem.o 00:03:26.033 CC lib/scsi/scsi_rpc.o 00:03:26.033 CC lib/ftl/ftl_core.o 00:03:26.033 CC lib/scsi/task.o 00:03:26.033 CC lib/ftl/ftl_init.o 00:03:26.033 CC lib/nvmf/nvmf.o 00:03:26.033 CC lib/ftl/ftl_layout.o 00:03:26.033 CC lib/nvmf/nvmf_rpc.o 00:03:26.033 CC lib/ftl/ftl_debug.o 00:03:26.033 CC lib/nvmf/transport.o 00:03:26.033 CC lib/ftl/ftl_io.o 00:03:26.033 CC lib/nvmf/tcp.o 00:03:26.033 CC lib/ftl/ftl_sb.o 00:03:26.033 CC lib/nvmf/stubs.o 00:03:26.033 CC lib/ftl/ftl_l2p.o 00:03:26.033 CC lib/nvmf/mdns_server.o 00:03:26.033 CC lib/ftl/ftl_l2p_flat.o 00:03:26.033 CC lib/nvmf/vfio_user.o 00:03:26.033 CC lib/ftl/ftl_nv_cache.o 00:03:26.033 CC lib/nvmf/rdma.o 00:03:26.033 CC lib/ftl/ftl_band.o 00:03:26.033 CC lib/nvmf/auth.o 00:03:26.033 CC lib/ftl/ftl_band_ops.o 00:03:26.033 CC lib/ftl/ftl_writer.o 00:03:26.033 CC lib/ftl/ftl_rq.o 00:03:26.033 CC lib/ftl/ftl_reloc.o 00:03:26.033 CC lib/ftl/ftl_l2p_cache.o 00:03:26.033 CC lib/ftl/ftl_p2l.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:26.033 CC lib/ftl/utils/ftl_conf.o 00:03:26.033 CC lib/ftl/utils/ftl_md.o 00:03:26.033 CC lib/ftl/utils/ftl_mempool.o 00:03:26.033 CC lib/ftl/utils/ftl_bitmap.o 00:03:26.033 CC lib/ftl/utils/ftl_property.o 00:03:26.033 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.033 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.033 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:26.033 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:26.033 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:26.033 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:26.033 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:26.033 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:26.033 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:26.033 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:26.033 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:26.033 CC lib/ftl/base/ftl_base_dev.o 00:03:26.033 CC lib/ftl/base/ftl_base_bdev.o 00:03:26.033 CC lib/ftl/ftl_trace.o 00:03:26.293 LIB libspdk_nbd.a 00:03:26.293 SO libspdk_nbd.so.7.0 00:03:26.554 SYMLINK libspdk_nbd.so 00:03:26.554 LIB libspdk_scsi.a 00:03:26.555 SO libspdk_scsi.so.9.0 00:03:26.555 LIB libspdk_ublk.a 00:03:26.555 SYMLINK libspdk_scsi.so 00:03:26.555 SO libspdk_ublk.so.3.0 00:03:26.816 SYMLINK libspdk_ublk.so 00:03:26.816 LIB libspdk_ftl.a 00:03:27.077 CC lib/iscsi/conn.o 00:03:27.077 CC lib/iscsi/init_grp.o 00:03:27.077 CC lib/iscsi/iscsi.o 00:03:27.077 CC lib/vhost/vhost.o 00:03:27.077 CC lib/iscsi/md5.o 00:03:27.077 CC lib/vhost/vhost_scsi.o 00:03:27.077 CC lib/vhost/vhost_rpc.o 00:03:27.077 CC lib/iscsi/param.o 00:03:27.077 CC lib/iscsi/portal_grp.o 00:03:27.077 CC lib/vhost/vhost_blk.o 00:03:27.077 CC lib/iscsi/tgt_node.o 00:03:27.077 CC lib/vhost/rte_vhost_user.o 00:03:27.077 CC lib/iscsi/iscsi_subsystem.o 00:03:27.077 CC lib/iscsi/iscsi_rpc.o 00:03:27.077 CC lib/iscsi/task.o 00:03:27.077 SO libspdk_ftl.so.9.0 00:03:27.339 SYMLINK libspdk_ftl.so 00:03:27.600 LIB libspdk_nvmf.a 00:03:27.862 SO libspdk_nvmf.so.19.0 00:03:27.862 LIB libspdk_vhost.a 00:03:27.862 SO libspdk_vhost.so.8.0 00:03:28.124 SYMLINK libspdk_nvmf.so 00:03:28.124 SYMLINK libspdk_vhost.so 00:03:28.124 LIB libspdk_iscsi.a 00:03:28.124 SO libspdk_iscsi.so.8.0 00:03:28.385 SYMLINK libspdk_iscsi.so 00:03:28.957 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.957 CC module/vfu_device/vfu_virtio.o 00:03:28.957 CC module/vfu_device/vfu_virtio_blk.o 00:03:28.957 CC module/vfu_device/vfu_virtio_scsi.o 00:03:28.957 CC module/vfu_device/vfu_virtio_rpc.o 00:03:28.957 LIB libspdk_env_dpdk_rpc.a 00:03:28.957 CC module/accel/dsa/accel_dsa.o 00:03:28.957 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.957 CC module/accel/iaa/accel_iaa.o 00:03:28.957 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.957 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.957 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.957 CC module/sock/posix/posix.o 00:03:28.957 CC module/keyring/file/keyring.o 00:03:28.957 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.957 CC module/blob/bdev/blob_bdev.o 00:03:28.957 CC module/keyring/file/keyring_rpc.o 00:03:29.220 CC module/accel/error/accel_error.o 00:03:29.220 CC module/accel/ioat/accel_ioat.o 00:03:29.220 CC module/accel/error/accel_error_rpc.o 00:03:29.220 CC module/keyring/linux/keyring.o 00:03:29.220 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.220 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.220 CC module/keyring/linux/keyring_rpc.o 00:03:29.220 SYMLINK libspdk_env_dpdk_rpc.so 00:03:29.220 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.220 LIB libspdk_keyring_file.a 00:03:29.220 LIB libspdk_keyring_linux.a 00:03:29.220 LIB libspdk_scheduler_gscheduler.a 00:03:29.220 LIB libspdk_accel_iaa.a 00:03:29.220 LIB libspdk_scheduler_dynamic.a 00:03:29.220 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:29.220 LIB libspdk_accel_error.a 00:03:29.220 SO libspdk_keyring_file.so.1.0 00:03:29.220 LIB libspdk_accel_ioat.a 00:03:29.220 SO libspdk_keyring_linux.so.1.0 00:03:29.220 SO libspdk_accel_iaa.so.3.0 00:03:29.220 SO libspdk_scheduler_gscheduler.so.4.0 00:03:29.220 SO libspdk_scheduler_dynamic.so.4.0 00:03:29.220 LIB libspdk_accel_dsa.a 00:03:29.481 SO 
libspdk_accel_error.so.2.0 00:03:29.481 LIB libspdk_blob_bdev.a 00:03:29.481 SO libspdk_accel_ioat.so.6.0 00:03:29.481 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.481 SYMLINK libspdk_scheduler_gscheduler.so 00:03:29.481 SO libspdk_accel_dsa.so.5.0 00:03:29.481 SYMLINK libspdk_accel_iaa.so 00:03:29.481 SYMLINK libspdk_keyring_file.so 00:03:29.481 SYMLINK libspdk_scheduler_dynamic.so 00:03:29.481 SYMLINK libspdk_keyring_linux.so 00:03:29.481 SO libspdk_blob_bdev.so.11.0 00:03:29.481 SYMLINK libspdk_accel_ioat.so 00:03:29.481 SYMLINK libspdk_accel_error.so 00:03:29.481 LIB libspdk_vfu_device.a 00:03:29.481 SYMLINK libspdk_blob_bdev.so 00:03:29.482 SYMLINK libspdk_accel_dsa.so 00:03:29.482 SO libspdk_vfu_device.so.3.0 00:03:29.743 SYMLINK libspdk_vfu_device.so 00:03:29.743 LIB libspdk_sock_posix.a 00:03:29.743 SO libspdk_sock_posix.so.6.0 00:03:30.004 SYMLINK libspdk_sock_posix.so 00:03:30.004 CC module/bdev/split/vbdev_split.o 00:03:30.004 CC module/bdev/split/vbdev_split_rpc.o 00:03:30.004 CC module/bdev/gpt/gpt.o 00:03:30.004 CC module/bdev/error/vbdev_error.o 00:03:30.004 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.004 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.004 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.004 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.004 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.004 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.004 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.004 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.004 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.004 CC module/bdev/malloc/bdev_malloc.o 00:03:30.004 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:30.004 CC module/bdev/delay/vbdev_delay.o 00:03:30.004 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.004 CC module/bdev/aio/bdev_aio.o 00:03:30.004 CC module/bdev/ftl/bdev_ftl.o 00:03:30.004 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.004 CC module/bdev/aio/bdev_aio_rpc.o 00:03:30.004 CC module/bdev/nvme/bdev_nvme.o 00:03:30.004 CC module/bdev/null/bdev_null.o 00:03:30.004 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.004 CC module/bdev/nvme/nvme_rpc.o 00:03:30.004 CC module/bdev/null/bdev_null_rpc.o 00:03:30.004 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.004 CC module/bdev/nvme/vbdev_opal.o 00:03:30.004 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.004 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.004 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:30.004 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.004 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:30.004 CC module/bdev/raid/bdev_raid.o 00:03:30.004 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.004 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.004 CC module/bdev/raid/bdev_raid_rpc.o 00:03:30.004 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:30.004 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.004 CC module/bdev/raid/raid0.o 00:03:30.004 CC module/bdev/raid/raid1.o 00:03:30.004 CC module/bdev/raid/concat.o 00:03:30.263 LIB libspdk_blobfs_bdev.a 00:03:30.263 LIB libspdk_bdev_split.a 00:03:30.263 SO libspdk_blobfs_bdev.so.6.0 00:03:30.263 LIB libspdk_bdev_error.a 00:03:30.263 LIB libspdk_bdev_null.a 00:03:30.263 SO libspdk_bdev_split.so.6.0 00:03:30.263 LIB libspdk_bdev_gpt.a 00:03:30.263 LIB libspdk_bdev_ftl.a 00:03:30.263 SO libspdk_bdev_error.so.6.0 00:03:30.263 SYMLINK libspdk_blobfs_bdev.so 00:03:30.263 SO libspdk_bdev_null.so.6.0 00:03:30.263 SO libspdk_bdev_gpt.so.6.0 00:03:30.263 LIB libspdk_bdev_passthru.a 00:03:30.263 SO libspdk_bdev_ftl.so.6.0 00:03:30.524 LIB libspdk_bdev_zone_block.a 
00:03:30.524 LIB libspdk_bdev_aio.a 00:03:30.524 SYMLINK libspdk_bdev_split.so 00:03:30.524 LIB libspdk_bdev_malloc.a 00:03:30.524 SO libspdk_bdev_passthru.so.6.0 00:03:30.524 LIB libspdk_bdev_delay.a 00:03:30.524 SYMLINK libspdk_bdev_error.so 00:03:30.524 SYMLINK libspdk_bdev_null.so 00:03:30.524 LIB libspdk_bdev_iscsi.a 00:03:30.524 SO libspdk_bdev_aio.so.6.0 00:03:30.524 SO libspdk_bdev_zone_block.so.6.0 00:03:30.524 SYMLINK libspdk_bdev_gpt.so 00:03:30.524 SO libspdk_bdev_malloc.so.6.0 00:03:30.524 SO libspdk_bdev_delay.so.6.0 00:03:30.524 SYMLINK libspdk_bdev_ftl.so 00:03:30.524 SYMLINK libspdk_bdev_passthru.so 00:03:30.524 SO libspdk_bdev_iscsi.so.6.0 00:03:30.524 SYMLINK libspdk_bdev_aio.so 00:03:30.524 SYMLINK libspdk_bdev_malloc.so 00:03:30.524 SYMLINK libspdk_bdev_zone_block.so 00:03:30.524 LIB libspdk_bdev_lvol.a 00:03:30.524 LIB libspdk_bdev_virtio.a 00:03:30.524 SYMLINK libspdk_bdev_delay.so 00:03:30.524 SYMLINK libspdk_bdev_iscsi.so 00:03:30.524 SO libspdk_bdev_lvol.so.6.0 00:03:30.524 SO libspdk_bdev_virtio.so.6.0 00:03:30.524 SYMLINK libspdk_bdev_lvol.so 00:03:30.785 SYMLINK libspdk_bdev_virtio.so 00:03:31.047 LIB libspdk_bdev_raid.a 00:03:31.047 SO libspdk_bdev_raid.so.6.0 00:03:31.047 SYMLINK libspdk_bdev_raid.so 00:03:31.991 LIB libspdk_bdev_nvme.a 00:03:31.991 SO libspdk_bdev_nvme.so.7.0 00:03:31.991 SYMLINK libspdk_bdev_nvme.so 00:03:32.937 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.937 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.937 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.937 CC module/event/subsystems/vmd/vmd.o 00:03:32.937 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.937 CC module/event/subsystems/keyring/keyring.o 00:03:32.937 CC module/event/subsystems/sock/sock.o 00:03:32.937 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.937 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:32.937 LIB libspdk_event_vhost_blk.a 00:03:32.937 LIB libspdk_event_scheduler.a 00:03:32.937 LIB libspdk_event_keyring.a 00:03:32.937 LIB libspdk_event_vmd.a 00:03:32.937 LIB libspdk_event_sock.a 00:03:32.937 LIB libspdk_event_vfu_tgt.a 00:03:32.937 LIB libspdk_event_iobuf.a 00:03:32.937 SO libspdk_event_vhost_blk.so.3.0 00:03:32.937 SO libspdk_event_keyring.so.1.0 00:03:32.937 SO libspdk_event_scheduler.so.4.0 00:03:32.937 SO libspdk_event_vmd.so.6.0 00:03:32.937 SO libspdk_event_sock.so.5.0 00:03:32.937 SO libspdk_event_vfu_tgt.so.3.0 00:03:32.937 SO libspdk_event_iobuf.so.3.0 00:03:33.219 SYMLINK libspdk_event_vhost_blk.so 00:03:33.219 SYMLINK libspdk_event_keyring.so 00:03:33.219 SYMLINK libspdk_event_scheduler.so 00:03:33.219 SYMLINK libspdk_event_sock.so 00:03:33.219 SYMLINK libspdk_event_vmd.so 00:03:33.219 SYMLINK libspdk_event_vfu_tgt.so 00:03:33.219 SYMLINK libspdk_event_iobuf.so 00:03:33.487 CC module/event/subsystems/accel/accel.o 00:03:33.761 LIB libspdk_event_accel.a 00:03:33.761 SO libspdk_event_accel.so.6.0 00:03:33.761 SYMLINK libspdk_event_accel.so 00:03:34.077 CC module/event/subsystems/bdev/bdev.o 00:03:34.339 LIB libspdk_event_bdev.a 00:03:34.339 SO libspdk_event_bdev.so.6.0 00:03:34.339 SYMLINK libspdk_event_bdev.so 00:03:34.601 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.601 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.601 CC module/event/subsystems/scsi/scsi.o 00:03:34.601 CC module/event/subsystems/ublk/ublk.o 00:03:34.601 CC module/event/subsystems/nbd/nbd.o 00:03:34.863 LIB libspdk_event_ublk.a 00:03:34.863 LIB libspdk_event_nbd.a 00:03:34.863 LIB libspdk_event_scsi.a 00:03:34.863 SO 
libspdk_event_ublk.so.3.0 00:03:34.863 SO libspdk_event_nbd.so.6.0 00:03:34.863 LIB libspdk_event_nvmf.a 00:03:34.863 SO libspdk_event_scsi.so.6.0 00:03:34.863 SYMLINK libspdk_event_ublk.so 00:03:34.863 SO libspdk_event_nvmf.so.6.0 00:03:34.863 SYMLINK libspdk_event_nbd.so 00:03:35.125 SYMLINK libspdk_event_scsi.so 00:03:35.125 SYMLINK libspdk_event_nvmf.so 00:03:35.386 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.386 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.648 LIB libspdk_event_vhost_scsi.a 00:03:35.648 LIB libspdk_event_iscsi.a 00:03:35.648 SO libspdk_event_vhost_scsi.so.3.0 00:03:35.648 SO libspdk_event_iscsi.so.6.0 00:03:35.648 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.648 SYMLINK libspdk_event_iscsi.so 00:03:35.909 SO libspdk.so.6.0 00:03:35.909 SYMLINK libspdk.so 00:03:36.172 CXX app/trace/trace.o 00:03:36.172 CC app/trace_record/trace_record.o 00:03:36.172 CC app/spdk_nvme_identify/identify.o 00:03:36.172 CC app/spdk_nvme_perf/perf.o 00:03:36.172 CC test/rpc_client/rpc_client_test.o 00:03:36.172 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:36.172 CC app/spdk_nvme_discover/discovery_aer.o 00:03:36.172 CC app/spdk_top/spdk_top.o 00:03:36.172 TEST_HEADER include/spdk/accel.h 00:03:36.172 TEST_HEADER include/spdk/accel_module.h 00:03:36.172 TEST_HEADER include/spdk/assert.h 00:03:36.172 TEST_HEADER include/spdk/barrier.h 00:03:36.172 TEST_HEADER include/spdk/bdev_module.h 00:03:36.172 CC app/spdk_lspci/spdk_lspci.o 00:03:36.172 TEST_HEADER include/spdk/base64.h 00:03:36.172 TEST_HEADER include/spdk/bdev.h 00:03:36.172 TEST_HEADER include/spdk/bdev_zone.h 00:03:36.172 TEST_HEADER include/spdk/bit_array.h 00:03:36.172 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:36.172 TEST_HEADER include/spdk/bit_pool.h 00:03:36.172 TEST_HEADER include/spdk/blob_bdev.h 00:03:36.172 TEST_HEADER include/spdk/blobfs.h 00:03:36.172 TEST_HEADER include/spdk/conf.h 00:03:36.172 TEST_HEADER include/spdk/blob.h 00:03:36.172 TEST_HEADER include/spdk/config.h 00:03:36.172 TEST_HEADER include/spdk/cpuset.h 00:03:36.172 TEST_HEADER include/spdk/crc16.h 00:03:36.172 TEST_HEADER include/spdk/crc32.h 00:03:36.172 TEST_HEADER include/spdk/crc64.h 00:03:36.172 TEST_HEADER include/spdk/dif.h 00:03:36.172 TEST_HEADER include/spdk/dma.h 00:03:36.172 TEST_HEADER include/spdk/env_dpdk.h 00:03:36.172 TEST_HEADER include/spdk/endian.h 00:03:36.172 TEST_HEADER include/spdk/env.h 00:03:36.172 TEST_HEADER include/spdk/fd_group.h 00:03:36.172 CC app/nvmf_tgt/nvmf_main.o 00:03:36.172 TEST_HEADER include/spdk/event.h 00:03:36.172 TEST_HEADER include/spdk/fd.h 00:03:36.172 TEST_HEADER include/spdk/ftl.h 00:03:36.172 CC app/spdk_dd/spdk_dd.o 00:03:36.172 TEST_HEADER include/spdk/file.h 00:03:36.172 TEST_HEADER include/spdk/gpt_spec.h 00:03:36.172 TEST_HEADER include/spdk/hexlify.h 00:03:36.172 TEST_HEADER include/spdk/histogram_data.h 00:03:36.172 TEST_HEADER include/spdk/idxd_spec.h 00:03:36.172 TEST_HEADER include/spdk/idxd.h 00:03:36.172 CC app/iscsi_tgt/iscsi_tgt.o 00:03:36.172 TEST_HEADER include/spdk/init.h 00:03:36.433 TEST_HEADER include/spdk/ioat.h 00:03:36.433 TEST_HEADER include/spdk/ioat_spec.h 00:03:36.433 TEST_HEADER include/spdk/iscsi_spec.h 00:03:36.433 TEST_HEADER include/spdk/json.h 00:03:36.433 TEST_HEADER include/spdk/jsonrpc.h 00:03:36.433 TEST_HEADER include/spdk/likely.h 00:03:36.433 TEST_HEADER include/spdk/keyring.h 00:03:36.433 TEST_HEADER include/spdk/log.h 00:03:36.433 TEST_HEADER include/spdk/keyring_module.h 00:03:36.433 TEST_HEADER include/spdk/mmio.h 00:03:36.433 
TEST_HEADER include/spdk/lvol.h 00:03:36.433 TEST_HEADER include/spdk/memory.h 00:03:36.433 CC app/spdk_tgt/spdk_tgt.o 00:03:36.433 TEST_HEADER include/spdk/nbd.h 00:03:36.433 TEST_HEADER include/spdk/net.h 00:03:36.433 TEST_HEADER include/spdk/notify.h 00:03:36.433 TEST_HEADER include/spdk/nvme.h 00:03:36.433 TEST_HEADER include/spdk/nvme_intel.h 00:03:36.433 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:36.433 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:36.433 TEST_HEADER include/spdk/nvme_spec.h 00:03:36.433 TEST_HEADER include/spdk/nvme_zns.h 00:03:36.433 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:36.433 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:36.433 TEST_HEADER include/spdk/nvmf.h 00:03:36.433 TEST_HEADER include/spdk/nvmf_spec.h 00:03:36.433 TEST_HEADER include/spdk/nvmf_transport.h 00:03:36.433 TEST_HEADER include/spdk/opal.h 00:03:36.433 TEST_HEADER include/spdk/opal_spec.h 00:03:36.433 TEST_HEADER include/spdk/pci_ids.h 00:03:36.433 TEST_HEADER include/spdk/queue.h 00:03:36.433 TEST_HEADER include/spdk/reduce.h 00:03:36.433 TEST_HEADER include/spdk/pipe.h 00:03:36.433 TEST_HEADER include/spdk/rpc.h 00:03:36.433 TEST_HEADER include/spdk/scsi.h 00:03:36.433 TEST_HEADER include/spdk/scheduler.h 00:03:36.433 TEST_HEADER include/spdk/scsi_spec.h 00:03:36.433 TEST_HEADER include/spdk/sock.h 00:03:36.433 TEST_HEADER include/spdk/stdinc.h 00:03:36.433 TEST_HEADER include/spdk/string.h 00:03:36.433 TEST_HEADER include/spdk/thread.h 00:03:36.433 TEST_HEADER include/spdk/trace.h 00:03:36.433 TEST_HEADER include/spdk/trace_parser.h 00:03:36.433 TEST_HEADER include/spdk/tree.h 00:03:36.433 TEST_HEADER include/spdk/ublk.h 00:03:36.433 TEST_HEADER include/spdk/util.h 00:03:36.433 TEST_HEADER include/spdk/uuid.h 00:03:36.434 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:36.434 TEST_HEADER include/spdk/version.h 00:03:36.434 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:36.434 TEST_HEADER include/spdk/vmd.h 00:03:36.434 TEST_HEADER include/spdk/vhost.h 00:03:36.434 TEST_HEADER include/spdk/xor.h 00:03:36.434 TEST_HEADER include/spdk/zipf.h 00:03:36.434 CXX test/cpp_headers/accel.o 00:03:36.434 CXX test/cpp_headers/accel_module.o 00:03:36.434 CXX test/cpp_headers/assert.o 00:03:36.434 CXX test/cpp_headers/barrier.o 00:03:36.434 CXX test/cpp_headers/base64.o 00:03:36.434 CXX test/cpp_headers/bdev.o 00:03:36.434 CXX test/cpp_headers/bdev_module.o 00:03:36.434 CXX test/cpp_headers/bit_array.o 00:03:36.434 CXX test/cpp_headers/bdev_zone.o 00:03:36.434 CXX test/cpp_headers/bit_pool.o 00:03:36.434 CXX test/cpp_headers/blob_bdev.o 00:03:36.434 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.434 CXX test/cpp_headers/blob.o 00:03:36.434 CXX test/cpp_headers/blobfs.o 00:03:36.434 CXX test/cpp_headers/conf.o 00:03:36.434 CXX test/cpp_headers/config.o 00:03:36.434 CXX test/cpp_headers/cpuset.o 00:03:36.434 CXX test/cpp_headers/crc16.o 00:03:36.434 CXX test/cpp_headers/crc64.o 00:03:36.434 CXX test/cpp_headers/crc32.o 00:03:36.434 CXX test/cpp_headers/dif.o 00:03:36.434 CXX test/cpp_headers/dma.o 00:03:36.434 CC examples/util/zipf/zipf.o 00:03:36.434 CXX test/cpp_headers/endian.o 00:03:36.434 CXX test/cpp_headers/env_dpdk.o 00:03:36.434 CC examples/ioat/perf/perf.o 00:03:36.434 CXX test/cpp_headers/env.o 00:03:36.434 CXX test/cpp_headers/event.o 00:03:36.434 CXX test/cpp_headers/fd.o 00:03:36.434 CXX test/cpp_headers/fd_group.o 00:03:36.434 CXX test/cpp_headers/file.o 00:03:36.434 CXX test/cpp_headers/ftl.o 00:03:36.434 CC examples/ioat/verify/verify.o 00:03:36.434 CXX test/cpp_headers/gpt_spec.o 
00:03:36.434 CXX test/cpp_headers/hexlify.o 00:03:36.434 CXX test/cpp_headers/histogram_data.o 00:03:36.434 CXX test/cpp_headers/idxd.o 00:03:36.434 CXX test/cpp_headers/init.o 00:03:36.434 CXX test/cpp_headers/idxd_spec.o 00:03:36.434 CXX test/cpp_headers/iscsi_spec.o 00:03:36.434 CXX test/cpp_headers/ioat_spec.o 00:03:36.434 CXX test/cpp_headers/ioat.o 00:03:36.434 CXX test/cpp_headers/json.o 00:03:36.434 CXX test/cpp_headers/keyring.o 00:03:36.434 CXX test/cpp_headers/jsonrpc.o 00:03:36.434 CXX test/cpp_headers/likely.o 00:03:36.434 CXX test/cpp_headers/log.o 00:03:36.434 CXX test/cpp_headers/keyring_module.o 00:03:36.434 CXX test/cpp_headers/nbd.o 00:03:36.434 CXX test/cpp_headers/lvol.o 00:03:36.434 CXX test/cpp_headers/notify.o 00:03:36.434 CXX test/cpp_headers/memory.o 00:03:36.434 CXX test/cpp_headers/nvme.o 00:03:36.434 CXX test/cpp_headers/mmio.o 00:03:36.434 CXX test/cpp_headers/net.o 00:03:36.434 CXX test/cpp_headers/nvme_intel.o 00:03:36.434 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.434 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.434 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.434 CXX test/cpp_headers/nvme_spec.o 00:03:36.434 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.434 CXX test/cpp_headers/nvme_zns.o 00:03:36.434 CXX test/cpp_headers/nvmf.o 00:03:36.434 CC test/thread/poller_perf/poller_perf.o 00:03:36.434 CXX test/cpp_headers/nvmf_spec.o 00:03:36.434 CXX test/cpp_headers/nvmf_transport.o 00:03:36.434 CXX test/cpp_headers/pipe.o 00:03:36.434 CXX test/cpp_headers/opal.o 00:03:36.434 CXX test/cpp_headers/opal_spec.o 00:03:36.434 CXX test/cpp_headers/queue.o 00:03:36.434 CXX test/cpp_headers/pci_ids.o 00:03:36.434 CC test/app/histogram_perf/histogram_perf.o 00:03:36.434 CXX test/cpp_headers/scheduler.o 00:03:36.434 LINK spdk_lspci 00:03:36.434 CXX test/cpp_headers/reduce.o 00:03:36.434 CXX test/cpp_headers/rpc.o 00:03:36.434 CC test/app/jsoncat/jsoncat.o 00:03:36.434 CXX test/cpp_headers/scsi.o 00:03:36.434 CXX test/cpp_headers/sock.o 00:03:36.434 CXX test/cpp_headers/scsi_spec.o 00:03:36.434 CXX test/cpp_headers/string.o 00:03:36.434 CXX test/cpp_headers/stdinc.o 00:03:36.434 CXX test/cpp_headers/thread.o 00:03:36.434 CXX test/cpp_headers/trace_parser.o 00:03:36.434 CC app/fio/nvme/fio_plugin.o 00:03:36.434 CXX test/cpp_headers/trace.o 00:03:36.434 CXX test/cpp_headers/tree.o 00:03:36.434 CXX test/cpp_headers/ublk.o 00:03:36.434 CC test/env/memory/memory_ut.o 00:03:36.434 CXX test/cpp_headers/util.o 00:03:36.434 CXX test/cpp_headers/version.o 00:03:36.434 CXX test/cpp_headers/uuid.o 00:03:36.434 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.434 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.434 LINK rpc_client_test 00:03:36.434 CXX test/cpp_headers/vmd.o 00:03:36.434 CXX test/cpp_headers/vhost.o 00:03:36.434 CC test/env/pci/pci_ut.o 00:03:36.434 CXX test/cpp_headers/xor.o 00:03:36.434 CXX test/cpp_headers/zipf.o 00:03:36.434 CC test/app/stub/stub.o 00:03:36.434 CC test/env/vtophys/vtophys.o 00:03:36.434 CC test/dma/test_dma/test_dma.o 00:03:36.434 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.696 LINK interrupt_tgt 00:03:36.696 LINK spdk_nvme_discover 00:03:36.696 CC app/fio/bdev/fio_plugin.o 00:03:36.696 CC test/app/bdev_svc/bdev_svc.o 00:03:36.696 LINK spdk_trace_record 00:03:36.696 LINK nvmf_tgt 00:03:36.696 LINK iscsi_tgt 00:03:36.955 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.955 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.955 LINK spdk_tgt 00:03:36.955 LINK jsoncat 00:03:36.955 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:36.955 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.955 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.955 LINK spdk_dd 00:03:36.955 LINK spdk_trace 00:03:36.955 LINK env_dpdk_post_init 00:03:36.955 LINK zipf 00:03:36.955 LINK verify 00:03:36.955 LINK histogram_perf 00:03:37.215 LINK ioat_perf 00:03:37.215 LINK vtophys 00:03:37.215 LINK poller_perf 00:03:37.215 LINK stub 00:03:37.215 LINK bdev_svc 00:03:37.215 LINK pci_ut 00:03:37.215 LINK test_dma 00:03:37.475 CC app/vhost/vhost.o 00:03:37.475 LINK spdk_nvme_perf 00:03:37.475 LINK vhost_fuzz 00:03:37.475 LINK mem_callbacks 00:03:37.475 LINK spdk_bdev 00:03:37.475 LINK nvme_fuzz 00:03:37.475 LINK spdk_nvme 00:03:37.475 LINK spdk_top 00:03:37.475 LINK spdk_nvme_identify 00:03:37.475 CC examples/sock/hello_world/hello_sock.o 00:03:37.475 CC examples/idxd/perf/perf.o 00:03:37.475 CC examples/vmd/lsvmd/lsvmd.o 00:03:37.475 CC examples/vmd/led/led.o 00:03:37.475 LINK vhost 00:03:37.475 CC examples/thread/thread/thread_ex.o 00:03:37.475 CC test/event/event_perf/event_perf.o 00:03:37.736 CC test/event/reactor/reactor.o 00:03:37.736 CC test/event/reactor_perf/reactor_perf.o 00:03:37.736 CC test/event/scheduler/scheduler.o 00:03:37.736 CC test/event/app_repeat/app_repeat.o 00:03:37.736 LINK led 00:03:37.736 LINK lsvmd 00:03:37.736 LINK hello_sock 00:03:37.736 LINK event_perf 00:03:37.736 LINK reactor 00:03:37.736 LINK idxd_perf 00:03:37.736 LINK reactor_perf 00:03:37.736 LINK app_repeat 00:03:37.736 LINK thread 00:03:37.736 CC test/nvme/aer/aer.o 00:03:37.736 CC test/nvme/e2edp/nvme_dp.o 00:03:37.736 CC test/nvme/err_injection/err_injection.o 00:03:37.736 CC test/nvme/simple_copy/simple_copy.o 00:03:37.997 CC test/nvme/reset/reset.o 00:03:37.997 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.997 CC test/nvme/reserve/reserve.o 00:03:37.997 CC test/nvme/startup/startup.o 00:03:37.997 CC test/nvme/sgl/sgl.o 00:03:37.997 CC test/nvme/compliance/nvme_compliance.o 00:03:37.997 CC test/nvme/boot_partition/boot_partition.o 00:03:37.997 CC test/nvme/overhead/overhead.o 00:03:37.997 CC test/nvme/cuse/cuse.o 00:03:37.997 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.997 CC test/nvme/fdp/fdp.o 00:03:37.997 CC test/nvme/connect_stress/connect_stress.o 00:03:37.997 CC test/accel/dif/dif.o 00:03:37.997 CC test/blobfs/mkfs/mkfs.o 00:03:37.998 LINK scheduler 00:03:37.998 CC test/lvol/esnap/esnap.o 00:03:37.998 LINK simple_copy 00:03:37.998 LINK memory_ut 00:03:37.998 LINK boot_partition 00:03:37.998 LINK startup 00:03:37.998 LINK err_injection 00:03:37.998 LINK fused_ordering 00:03:37.998 LINK reserve 00:03:37.998 LINK doorbell_aers 00:03:37.998 LINK connect_stress 00:03:37.998 LINK mkfs 00:03:38.259 LINK sgl 00:03:38.259 LINK aer 00:03:38.259 LINK nvme_dp 00:03:38.259 LINK reset 00:03:38.259 LINK overhead 00:03:38.259 LINK nvme_compliance 00:03:38.259 LINK fdp 00:03:38.259 CC examples/nvme/arbitration/arbitration.o 00:03:38.259 CC examples/nvme/hotplug/hotplug.o 00:03:38.259 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.259 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.259 CC examples/nvme/reconnect/reconnect.o 00:03:38.259 CC examples/nvme/abort/abort.o 00:03:38.259 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:38.259 CC examples/nvme/hello_world/hello_world.o 00:03:38.259 LINK dif 00:03:38.259 CC examples/blob/hello_world/hello_blob.o 00:03:38.259 CC examples/accel/perf/accel_perf.o 00:03:38.259 LINK iscsi_fuzz 00:03:38.521 CC examples/blob/cli/blobcli.o 00:03:38.521 LINK pmr_persistence 00:03:38.521 LINK cmb_copy 00:03:38.521 LINK 
hello_world 00:03:38.521 LINK hotplug 00:03:38.521 LINK arbitration 00:03:38.521 LINK reconnect 00:03:38.521 LINK abort 00:03:38.521 LINK hello_blob 00:03:38.782 LINK nvme_manage 00:03:38.782 LINK accel_perf 00:03:38.782 LINK blobcli 00:03:38.783 CC test/bdev/bdevio/bdevio.o 00:03:39.056 LINK cuse 00:03:39.318 LINK bdevio 00:03:39.318 CC examples/bdev/hello_world/hello_bdev.o 00:03:39.318 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.578 LINK hello_bdev 00:03:40.152 LINK bdevperf 00:03:40.724 CC examples/nvmf/nvmf/nvmf.o 00:03:40.985 LINK nvmf 00:03:42.370 LINK esnap 00:03:42.631 00:03:42.631 real 0m54.044s 00:03:42.631 user 7m34.734s 00:03:42.631 sys 4m12.538s 00:03:42.631 16:43:02 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:42.631 16:43:02 make -- common/autotest_common.sh@10 -- $ set +x 00:03:42.631 ************************************ 00:03:42.631 END TEST make 00:03:42.631 ************************************ 00:03:42.631 16:43:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:42.631 16:43:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:42.631 16:43:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:42.631 16:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.631 16:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:42.631 16:43:02 -- pm/common@44 -- $ pid=1093104 00:03:42.631 16:43:02 -- pm/common@50 -- $ kill -TERM 1093104 00:03:42.631 16:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.631 16:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:42.631 16:43:02 -- pm/common@44 -- $ pid=1093105 00:03:42.631 16:43:02 -- pm/common@50 -- $ kill -TERM 1093105 00:03:42.631 16:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.631 16:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:42.631 16:43:02 -- pm/common@44 -- $ pid=1093107 00:03:42.631 16:43:02 -- pm/common@50 -- $ kill -TERM 1093107 00:03:42.631 16:43:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.631 16:43:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:42.631 16:43:02 -- pm/common@44 -- $ pid=1093124 00:03:42.631 16:43:02 -- pm/common@50 -- $ sudo -E kill -TERM 1093124 00:03:42.893 16:43:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:42.893 16:43:02 -- nvmf/common.sh@7 -- # uname -s 00:03:42.893 16:43:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.893 16:43:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.893 16:43:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.893 16:43:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.893 16:43:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.893 16:43:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.893 16:43:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.893 16:43:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.893 16:43:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.893 16:43:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.893 16:43:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:42.893 16:43:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:42.893 16:43:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.893 16:43:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.893 16:43:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:42.893 16:43:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:42.893 16:43:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:42.893 16:43:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.893 16:43:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.893 16:43:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.893 16:43:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.893 16:43:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.893 16:43:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.893 16:43:02 -- paths/export.sh@5 -- # export PATH 00:03:42.893 16:43:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.893 16:43:02 -- nvmf/common.sh@47 -- # : 0 00:03:42.893 16:43:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:42.893 16:43:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:42.893 16:43:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:42.893 16:43:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.893 16:43:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.893 16:43:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:42.893 16:43:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:42.893 16:43:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:42.893 16:43:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:42.893 16:43:02 -- spdk/autotest.sh@32 -- # uname -s 00:03:42.893 16:43:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:42.893 16:43:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:42.893 16:43:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.893 16:43:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:42.893 16:43:02 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.893 16:43:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.893 16:43:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.893 16:43:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.893 16:43:02 -- spdk/autotest.sh@48 -- # udevadm_pid=1158227 00:03:42.893 16:43:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:42.893 16:43:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.893 16:43:02 -- pm/common@17 -- # local monitor 00:03:42.893 16:43:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.893 16:43:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.893 16:43:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.893 16:43:02 -- pm/common@21 -- # date +%s 00:03:42.893 16:43:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.893 16:43:02 -- pm/common@21 -- # date +%s 00:03:42.893 16:43:02 -- pm/common@25 -- # sleep 1 00:03:42.893 16:43:02 -- pm/common@21 -- # date +%s 00:03:42.893 16:43:02 -- pm/common@21 -- # date +%s 00:03:42.893 16:43:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721918582 00:03:42.893 16:43:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721918582 00:03:42.893 16:43:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721918582 00:03:42.893 16:43:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721918582 00:03:42.893 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721918582_collect-vmstat.pm.log 00:03:42.893 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721918582_collect-cpu-load.pm.log 00:03:42.893 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721918582_collect-cpu-temp.pm.log 00:03:42.893 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721918582_collect-bmc-pm.bmc.pm.log 00:03:43.837 16:43:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:43.837 16:43:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:43.837 16:43:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:43.837 16:43:03 -- common/autotest_common.sh@10 -- # set +x 00:03:43.837 16:43:03 -- spdk/autotest.sh@59 -- # create_test_list 00:03:43.837 16:43:03 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:43.837 16:43:03 -- common/autotest_common.sh@10 -- # set +x 00:03:43.837 16:43:04 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:43.837 16:43:04 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.837 16:43:04 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
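The four "Redirecting to ... .pm.log" lines above come from the resource monitors that autotest starts before the tests proper: each collector under scripts/perf/pm is launched with the same output directory and a shared monitor.autotest.sh.<epoch> prefix, and its output lands in a matching .pm.log file. The snippet below is only a minimal sketch of that launch pattern, not the actual pm/common code; the collector names and the -d/-l/-p flags are taken from the trace, while the OUT/PM variables, the backgrounding, and the .pid bookkeeping are assumptions.

    #!/usr/bin/env bash
    # Sketch of the monitor launch seen in the trace above (assumptions noted inline).
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power   # assumed output dir
    PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm    # collectors' location
    prefix="monitor.autotest.sh.$(date +%s)"   # shared suffix, e.g. 1721918582 in the trace

    for collector in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        # -d output dir, -l log to file, -p common prefix: flags as they appear in the trace.
        # collect-bmc-pm is additionally run through 'sudo -E' in the trace; omitted here.
        "$PM/$collector" -d "$OUT" -l -p "$prefix" &
        echo $! > "$OUT/$collector.pid"   # assumption: pid files consumed later by kill -TERM
    done

The shared epoch suffix is what lets the teardown at the top of this section find and terminate the same collectors via the collect-*.pid files.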
00:03:43.837 16:43:04 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:43.837 16:43:04 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.837 16:43:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:43.837 16:43:04 -- common/autotest_common.sh@1455 -- # uname 00:03:43.837 16:43:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:43.837 16:43:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:43.837 16:43:04 -- common/autotest_common.sh@1475 -- # uname 00:03:43.837 16:43:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:43.837 16:43:04 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:43.837 16:43:04 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:43.837 16:43:04 -- spdk/autotest.sh@72 -- # hash lcov 00:03:43.837 16:43:04 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:43.837 16:43:04 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:43.837 --rc lcov_branch_coverage=1 00:03:43.837 --rc lcov_function_coverage=1 00:03:43.837 --rc genhtml_branch_coverage=1 00:03:43.837 --rc genhtml_function_coverage=1 00:03:43.837 --rc genhtml_legend=1 00:03:43.837 --rc geninfo_all_blocks=1 00:03:43.837 ' 00:03:43.837 16:43:04 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:43.837 --rc lcov_branch_coverage=1 00:03:43.837 --rc lcov_function_coverage=1 00:03:43.837 --rc genhtml_branch_coverage=1 00:03:43.837 --rc genhtml_function_coverage=1 00:03:43.837 --rc genhtml_legend=1 00:03:43.837 --rc geninfo_all_blocks=1 00:03:43.837 ' 00:03:43.837 16:43:04 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:43.837 --rc lcov_branch_coverage=1 00:03:43.837 --rc lcov_function_coverage=1 00:03:43.837 --rc genhtml_branch_coverage=1 00:03:43.837 --rc genhtml_function_coverage=1 00:03:43.837 --rc genhtml_legend=1 00:03:43.837 --rc geninfo_all_blocks=1 00:03:43.837 --no-external' 00:03:43.837 16:43:04 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:43.837 --rc lcov_branch_coverage=1 00:03:43.837 --rc lcov_function_coverage=1 00:03:43.837 --rc genhtml_branch_coverage=1 00:03:43.837 --rc genhtml_function_coverage=1 00:03:43.837 --rc genhtml_legend=1 00:03:43.837 --rc geninfo_all_blocks=1 00:03:43.837 --no-external' 00:03:43.837 16:43:04 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:44.098 lcov: LCOV version 1.14 00:03:44.098 16:43:04 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:45.488 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:45.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:45.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:45.751 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:45.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:45.751 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:46.013 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:46.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:46.013 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:46.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:46.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:46.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:46.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:46.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:46.014 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:46.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:46.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:46.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:46.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:46.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:46.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:46.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:46.275 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:58.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:58.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:13.518 16:43:33 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:13.518 16:43:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.518 16:43:33 -- common/autotest_common.sh@10 -- # set +x 00:04:13.518 16:43:33 -- spdk/autotest.sh@91 -- # rm -f 00:04:13.518 16:43:33 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.822 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:16.822 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:16.823 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:16.823 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:16.823 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:17.084 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:17.084 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:17.084 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:17.084 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:17.084 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:17.346 16:43:37 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:17.346 16:43:37 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:17.346 16:43:37 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:17.346 16:43:37 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:17.346 16:43:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.346 16:43:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:17.346 16:43:37 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:17.346 
16:43:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.346 16:43:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.346 16:43:37 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:17.346 16:43:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.346 16:43:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:17.346 16:43:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:17.346 16:43:37 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:17.346 16:43:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:17.346 No valid GPT data, bailing 00:04:17.346 16:43:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.346 16:43:37 -- scripts/common.sh@391 -- # pt= 00:04:17.346 16:43:37 -- scripts/common.sh@392 -- # return 1 00:04:17.346 16:43:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:17.346 1+0 records in 00:04:17.346 1+0 records out 00:04:17.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464184 s, 226 MB/s 00:04:17.346 16:43:37 -- spdk/autotest.sh@118 -- # sync 00:04:17.346 16:43:37 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:17.346 16:43:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:17.346 16:43:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:27.354 16:43:45 -- spdk/autotest.sh@124 -- # uname -s 00:04:27.354 16:43:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:27.354 16:43:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:27.354 16:43:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.354 16:43:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.354 16:43:45 -- common/autotest_common.sh@10 -- # set +x 00:04:27.354 ************************************ 00:04:27.354 START TEST setup.sh 00:04:27.354 ************************************ 00:04:27.354 16:43:45 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:27.354 * Looking for test storage... 00:04:27.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:27.354 16:43:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:27.354 16:43:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:27.354 16:43:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:27.354 16:43:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.354 16:43:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.354 16:43:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.354 ************************************ 00:04:27.354 START TEST acl 00:04:27.354 ************************************ 00:04:27.354 16:43:45 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:27.354 * Looking for test storage... 
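Just before the setup tests, the trace walks the test NVMe namespace: is_block_zoned reads /sys/block/<dev>/queue/zoned and skips anything that is not "none", block_in_use runs scripts/spdk-gpt.py and blkid -s PTTYPE against the device, and only when neither reports a partition table ("No valid GPT data, bailing", empty PTTYPE) is the first MiB zeroed with dd, followed by a sync. A rough bash rendering of that flow follows; it is a sketch, not the autotest implementation: the check_and_wipe helper is hypothetical and treating a zero exit from spdk-gpt.py as "GPT present" is an assumption, while the sysfs path, the blkid flags, and the dd invocation are taken from the trace.

    #!/usr/bin/env bash
    # Sketch of the pre-test namespace check seen in the trace above; not the real autotest code.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed repo checkout path

    check_and_wipe() {   # hypothetical helper, for illustration only
        local dev=$1 name
        name=$(basename "$dev")
        # Zoned check: the trace tests /sys/block/<dev>/queue/zoned against "none".
        if [[ -e /sys/block/$name/queue/zoned ]] && [[ $(cat /sys/block/$name/queue/zoned) != none ]]; then
            echo "skipping zoned namespace $dev"
            return 0
        fi
        # Partition-table check: the trace runs spdk-gpt.py and then blkid -s PTTYPE -o value.
        # Assumption: a zero exit from spdk-gpt.py is treated here as "GPT found".
        if "$SPDK/scripts/spdk-gpt.py" "$dev" || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
            echo "$dev looks in use, leaving it alone"
            return 0
        fi
        # Neither check found anything ("No valid GPT data, bailing"): wipe the label area.
        dd if=/dev/zero of="$dev" bs=1M count=1   # destructive, matches the dd in the trace
        sync
    }

    check_and_wipe /dev/nvme0n1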
00:04:27.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:27.354 16:43:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.354 16:43:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.354 16:43:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:27.354 16:43:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:27.354 16:43:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:27.354 16:43:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:27.354 16:43:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:27.354 16:43:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.354 16:43:46 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.904 16:43:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:29.904 16:43:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:29.904 16:43:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.904 16:43:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:29.904 16:43:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.904 16:43:49 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:33.211 Hugepages 00:04:33.211 node hugesize free / total 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 00:04:33.211 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.211 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:33.212 16:43:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:33.212 16:43:53 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.212 16:43:53 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.212 16:43:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.212 ************************************ 00:04:33.212 START TEST denied 00:04:33.212 ************************************ 00:04:33.212 16:43:53 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:33.212 16:43:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:33.212 16:43:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:33.212 16:43:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:33.212 16:43:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.212 16:43:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.433 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:37.433 16:43:56 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.433 16:43:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.647 00:04:41.647 real 0m8.743s 00:04:41.647 user 0m2.879s 00:04:41.647 sys 0m5.151s 00:04:41.647 16:44:01 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.647 16:44:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:41.647 ************************************ 00:04:41.647 END TEST denied 00:04:41.647 ************************************ 00:04:41.647 16:44:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:41.647 16:44:01 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.647 16:44:01 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.647 16:44:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:41.647 ************************************ 00:04:41.647 START TEST allowed 00:04:41.647 ************************************ 00:04:41.647 16:44:01 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:41.647 16:44:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:41.647 16:44:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:41.647 16:44:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:41.647 16:44:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.647 16:44:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.021 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:47.021 16:44:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:47.021 16:44:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:47.021 16:44:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:47.021 16:44:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.021 16:44:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.329 00:04:50.329 real 0m8.352s 00:04:50.329 user 0m2.090s 00:04:50.329 sys 0m4.382s 00:04:50.329 16:44:10 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.329 16:44:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:50.329 ************************************ 00:04:50.329 END TEST allowed 00:04:50.329 ************************************ 00:04:50.329 00:04:50.329 real 0m24.324s 00:04:50.329 user 0m7.498s 00:04:50.329 sys 0m14.265s 00:04:50.329 16:44:10 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.329 16:44:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:50.329 ************************************ 00:04:50.329 END TEST acl 00:04:50.329 ************************************ 00:04:50.329 16:44:10 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:50.329 16:44:10 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.329 16:44:10 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.329 16:44:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.329 ************************************ 00:04:50.329 START TEST hugepages 00:04:50.329 ************************************ 00:04:50.329 16:44:10 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:50.329 * Looking for test storage... 00:04:50.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.329 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102437816 kB' 'MemAvailable: 106155516 kB' 'Buffers: 2704 kB' 'Cached: 14768600 kB' 'SwapCached: 0 kB' 'Active: 11615252 kB' 'Inactive: 3693560 kB' 'Active(anon): 11135452 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540924 kB' 'Mapped: 221088 kB' 'Shmem: 10597944 kB' 'KReclaimable: 585284 kB' 'Slab: 1471144 kB' 'SReclaimable: 585284 kB' 'SUnreclaim: 885860 kB' 'KernelStack: 27280 kB' 'PageTables: 9496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12713520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.330 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
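The block above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time with IFS=': ' and read -r var val _, skipping every key until it reaches Hugepagesize, then echoing its value: 2048 kB on this host. A minimal sketch of that lookup pattern, assuming bash and a readable /proc/meminfo (the function name below is illustrative, not the exact SPDK helper):

    # sketch_get_meminfo KEY -- illustrative re-creation of the /proc/meminfo lookup traced above
    # (the real helper can also read /sys/devices/system/node/node<N>/meminfo and strips the
    #  leading "Node <N>" column first; that part is omitted here)
    sketch_get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # kB for sizes, a bare count for HugePages_* keys
                return 0
            fi
        done </proc/meminfo
        return 1
    }
    # sketch_get_meminfo Hugepagesize  -> 2048 on this machine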
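With the 2048 kB default established, hugepages.sh enumerates the NUMA nodes under /sys/devices/system/node (no_nodes=2 on this host), then clear_hp writes 0 into every hugepages-*/nr_hugepages file on both nodes and exports CLEAR_HUGE=yes for the setup.sh run that follows. A hedged sketch of that clearing step (the sysfs paths are the ones visible in the trace; the function name is illustrative and the writes need root):

    # sketch_clear_hp -- drop any pre-existing per-node hugepage reservations, as traced above
    sketch_clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # writing 0 releases that node's pool for this page size
                # (the trace shows two sizes per node)
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }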
00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.331 16:44:10 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:50.331 16:44:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.331 16:44:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.331 16:44:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.331 ************************************ 00:04:50.331 START TEST default_setup 00:04:50.331 ************************************ 00:04:50.331 16:44:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.332 16:44:10 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.638 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:80:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:04:53.638 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:53.638 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104625548 kB' 'MemAvailable: 108343240 kB' 'Buffers: 2704 kB' 'Cached: 14768724 kB' 'SwapCached: 0 kB' 'Active: 11632116 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152316 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557536 kB' 'Mapped: 221444 kB' 'Shmem: 10598068 kB' 'KReclaimable: 585276 kB' 'Slab: 1468616 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883340 kB' 'KernelStack: 27312 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12731116 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.905 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
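At this point the default_setup test has asked get_test_nr_hugepages for 2097152 kB on node 0, which at the detected 2048 kB page size works out to 2097152 / 2048 = 1024 pages (the HugePages_Total/HugePages_Free of 1024 already visible in the meminfo snapshots above), and scripts/setup.sh has rebound the ioatdma channels and the NVMe device at 0000:65:00.0 to vfio-pci. verify_nr_hugepages then re-reads /proc/meminfo: transparent hugepages are madvise-only ("always [madvise] never"), so AnonHugePages comes back 0 and is recorded as anon=0; the same lookup is repeated for HugePages_Surp (surplus pages beyond the configured pool) and, next in the trace, HugePages_Rsvd. A small self-contained sketch of that bookkeeping, using awk in place of the get_meminfo helper (variable names are illustrative):

    # rough equivalent of the checks verify_nr_hugepages is tracing here
    size_kb=2097152                                               # requested by get_test_nr_hugepages 2097152 0
    page_kb=$(awk '$1=="Hugepagesize:"{print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / page_kb ))                         # 2097152 / 2048 = 1024 pages on node 0

    anon=$(awk '$1=="AnonHugePages:"{print $2}' /proc/meminfo)    # 0 when THP is madvise-only and idle
    surp=$(awk '$1=="HugePages_Surp:"{print $2}' /proc/meminfo)   # surplus pages beyond the pool, 0 here
    resv=$(awk '$1=="HugePages_Rsvd:"{print $2}' /proc/meminfo)   # reserved-but-unfaulted pages
    echo "pool=$nr_hugepages anon=$anon surp=$surp resv=$resv"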
00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.906 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104626020 kB' 'MemAvailable: 108343712 kB' 'Buffers: 2704 kB' 'Cached: 14768728 kB' 'SwapCached: 0 kB' 'Active: 11631788 kB' 'Inactive: 3693560 kB' 'Active(anon): 11151988 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557184 kB' 'Mapped: 221320 kB' 'Shmem: 10598072 kB' 'KReclaimable: 585276 kB' 'Slab: 1468580 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883304 kB' 'KernelStack: 27264 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12729520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.907 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.908 16:44:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: the remaining fields -- SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- are each tested against HugePages_Surp and skipped via continue]
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
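For readers following the xtrace: the setup/common.sh@17-33 lines above are one pass of the get_meminfo helper, which loads a meminfo file into an array, strips any per-node "Node N " prefix, and scans field names until the requested one matches. A minimal sketch reconstructed from the trace (not a verbatim copy of common.sh; extglob is assumed, since the +([0-9]) pattern needs it):

shopt -s extglob

get_meminfo_sketch() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # Per-node meminfo lives under /sys and prefixes every field with "Node N "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # IFS=': ' splits "HugePages_Surp:       0" into var=HugePages_Surp val=0
        while IFS=': ' read -r var val _; do
                if [[ $var == "$get" ]]; then
                        echo "${val:-0}"
                        return 0
                fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}

surp=$(get_meminfo_sketch HugePages_Surp)   # the run above returned 0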
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.908 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.909 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.909 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:53.909 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:53.909 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104625964 kB' 'MemAvailable: 108343656 kB' 'Buffers: 2704 kB' 'Cached: 14768744 kB' 'SwapCached: 0 kB' 'Active: 11632104 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152304 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557404 kB' 'Mapped: 221320 kB' 'Shmem: 10598088 kB' 'KReclaimable: 585276 kB' 'Slab: 1468580 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883304 kB' 'KernelStack: 27200 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12731156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB'
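One quick sanity check on the snapshot just printed is that its hugepage fields are internally consistent: HugePages_Total times Hugepagesize equals Hugetlb (1024 * 2048 kB = 2097152 kB). A throwaway sketch with the values copied from the printf line above (variable names are illustrative):

hugepages_total=1024     # HugePages_Total
hugepagesize_kb=2048     # Hugepagesize, in kB
hugetlb_kb=2097152       # Hugetlb, in kB
if (( hugepages_total * hugepagesize_kb == hugetlb_kb )); then
        echo "hugetlb pool accounts for $((hugepages_total * hugepagesize_kb)) kB, as expected"
fi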
00:04:53.909 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: each field printed above, from MemTotal through HugePages_Free, is tested against HugePages_Rsvd and skipped via continue]
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:53.911 nr_hugepages=1024
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:53.911 resv_hugepages=0
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:53.911 surplus_hugepages=0
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:53.911 anon_hugepages=0
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
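The checks at setup/hugepages.sh@107-110 all revolve around the same accounting identity: the kernel's HugePages_Total should equal the requested nr_hugepages plus any surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). A hedged sketch of that assertion, reading the total straight from /proc/meminfo instead of going through get_meminfo:

nr_hugepages=1024
surp=0   # HugePages_Surp, from the first get_meminfo call above
resv=0   # HugePages_Rsvd, from the second get_meminfo call above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent: $total pages"
else
        echo "unexpected HugePages_Total: $total" >&2
fi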
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104624224 kB' 'MemAvailable: 108341916 kB' 'Buffers: 2704 kB' 'Cached: 14768768 kB' 'SwapCached: 0 kB' 'Active: 11631984 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152184 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557312 kB' 'Mapped: 221320 kB' 'Shmem: 10598112 kB' 'KReclaimable: 585276 kB' 'Slab: 1468580 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883304 kB' 'KernelStack: 27216 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12729564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB'
00:04:53.911 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: each field printed above, from MemTotal through Unaccepted, is tested against HugePages_Total and skipped via continue]
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:53.912 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57674720 kB' 'MemUsed: 7984288 kB' 'SwapCached: 0 kB' 'Active: 2974604 kB' 'Inactive: 237284 kB' 'Active(anon): 2735180 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2924780 kB' 'Mapped: 92504 kB' 'AnonPages: 290288 kB' 'Shmem: 2448072 kB' 'KernelStack: 15848 kB' 'PageTables: 5868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275612 kB' 'Slab: 792896 kB' 'SReclaimable: 275612 kB' 'SUnreclaim: 517284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
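get_nodes, traced above, walks /sys/devices/system/node/nodeN and records how many hugepages each node holds (1024 on node0, 0 on node1 in this run, no_nodes=2); get_meminfo is then re-run per node against the node-local meminfo file. A rough equivalent, using an awk shortcut on the per-node meminfo rather than the get_meminfo helper (the nodes_sys name is taken from the trace; the rest is illustrative):

shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
        # per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[$n]} hugepages"
done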
00:04:53.913 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: node0 fields from MemTotal through Unaccepted are each tested against HugePages_Surp and skipped via continue]
00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.914 16:44:14
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:53.914 node0=1024 expecting 1024 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:53.914 00:04:53.914 real 0m3.586s 00:04:53.914 user 0m1.249s 00:04:53.914 sys 0m2.332s 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.914 16:44:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:53.914 ************************************ 00:04:53.914 END TEST default_setup 00:04:53.914 ************************************ 00:04:54.176 16:44:14 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:54.176 16:44:14 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.176 16:44:14 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.176 16:44:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.176 ************************************ 00:04:54.176 START TEST per_node_1G_alloc 00:04:54.176 ************************************ 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
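[annotation] The per_node_1G_alloc trace above requests 1048576 kB across nodes 0 and 1 and arrives at nr_hugepages=512 per node. A minimal bash sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps below; the helper name nr_pages_for_size is illustrative and not part of setup/hugepages.sh:

    default_hugepages_kb=2048                 # Hugepagesize from /proc/meminfo
    nr_pages_for_size() {                     # requested kB -> number of default-size hugepages
      local size_kb=$1
      echo $(( size_kb / default_hugepages_kb ))
    }
    NRHUGE=$(nr_pages_for_size 1048576)       # 1048576 / 2048 = 512
    HUGENODE=0,1                              # each listed NUMA node is asked for NRHUGE pages
    echo "NRHUGE=$NRHUGE HUGENODE=$HUGENODE"  # the environment handed to scripts/setup.sh

With two nodes in HUGENODE the test therefore expects 512 pages on node0 and 512 on node1, which is what the nodes_test assignments in the trace that follows record.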
00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.176 16:44:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.485 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:57.485 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- 
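[annotation] The "Already using the vfio-pci driver" lines above are printed while scripts/setup.sh walks the NVMe and IOAT devices. A hedged way to reproduce that check for individual devices outside the script, using only the standard sysfs driver symlink; the BDFs are taken from the listing above and the loop is illustrative, not setup.sh itself:

    for bdf in 0000:65:00.0 0000:80:01.0; do                  # BDFs from the listing above
      drv_link=/sys/bus/pci/devices/$bdf/driver
      if [[ -e $drv_link ]]; then
        echo "$bdf is bound to $(basename "$(readlink -f "$drv_link")")"
      else
        echo "$bdf is not bound to any driver"
      fi
    done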
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:57.485 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.753 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104644408 kB' 'MemAvailable: 108362100 kB' 'Buffers: 2704 kB' 'Cached: 14768880 kB' 'SwapCached: 0 kB' 'Active: 11631788 kB' 'Inactive: 3693560 kB' 'Active(anon): 11151988 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557132 kB' 'Mapped: 220788 kB' 'Shmem: 10598224 kB' 'KReclaimable: 585276 kB' 'Slab: 1468628 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883352 kB' 'KernelStack: 27168 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12719884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 
101711872 kB' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.754 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
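[annotation] The long runs of escaped-key comparisons and "continue" above are the xtrace of a field-by-field scan of /proc/meminfo that just yielded anon=0. A simplified stand-in that mirrors the visible pattern (IFS=': ' read, skip until the requested key, echo the value), assuming the same optional per-node file under /sys/devices/system/node; this is a sketch, not the verbatim setup/common.sh implementation:

    get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # with a node id, prefer the per-node file (its lines carry a "Node <n> " prefix)
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val
      while IFS= read -r line; do
        line=${line#Node "$node" }            # harmless no-op for /proc/meminfo lines
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # otherwise keep scanning
      done < "$mem_f"
      return 1
    }
    get_meminfo_sketch AnonHugePages          # prints 0 on this machine, hence anon=0
    get_meminfo_sketch HugePages_Total 0      # per-node variant used by the node checks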
00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104640876 kB' 'MemAvailable: 108358568 kB' 'Buffers: 2704 kB' 'Cached: 14768880 kB' 'SwapCached: 0 kB' 'Active: 11635300 kB' 'Inactive: 3693560 kB' 'Active(anon): 11155500 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561156 kB' 'Mapped: 220728 kB' 'Shmem: 10598224 kB' 'KReclaimable: 585276 kB' 'Slab: 1468608 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883332 kB' 'KernelStack: 27136 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12723744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.755 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.755 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.756 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:57.757 16:44:17 
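[annotation] With anon=0 and surp=0 established above, and HugePages_Rsvd queried next, the remaining work is the per-node accounting that ended the default_setup run with "node0=1024 expecting 1024". A hedged sketch of that bookkeeping for this test's 512-per-node expectation, reading the per-node surplus straight from sysfs; the array name echoes the trace, the awk shortcut stands in for the scan loop, and the exact failure handling in setup/hugepages.sh may differ:

    nodes_test=( [0]=512 [1]=512 )                # pages requested per node by this test
    expected=512
    for node in "${!nodes_test[@]}"; do
      # per-node surplus, format: "Node <n> HugePages_Surp:     <value>"
      node_surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
                  "/sys/devices/system/node/node$node/meminfo")
      (( nodes_test[node] += ${node_surp:-0} ))   # surplus pages would inflate the count
      echo "node$node=${nodes_test[node]} expecting $expected"
      [[ ${nodes_test[node]} -eq $expected ]] || exit 1
    done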
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104645504 kB' 'MemAvailable: 108363196 kB' 'Buffers: 2704 kB' 'Cached: 14768896 kB' 'SwapCached: 0 kB' 'Active: 11629632 kB' 'Inactive: 3693560 kB' 'Active(anon): 11149832 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554956 kB' 'Mapped: 220504 kB' 'Shmem: 10598240 kB' 'KReclaimable: 585276 kB' 'Slab: 1468620 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883344 kB' 'KernelStack: 27120 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12717648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:04:57.757 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.759 nr_hugepages=1024 00:04:57.759 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.760 resv_hugepages=0 00:04:57.760 16:44:17 
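Both pools come back empty here (surp=0 earlier, resv=0 just above), and the echoes restate the request: nr_hugepages=1024 two-megabyte pages with resv_hugepages=0. Those numbers are consistent with the meminfo snapshot printed in this block (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB), and the (( 1024 == nr_hugepages + surp + resv )) checks traced just below assert exactly that. A small sketch of the arithmetic, with the values copied from the snapshot (the variable names are illustrative, not the script's):

  # Hugepage bookkeeping on this box; values taken from the meminfo snapshot above.
  nr_hugepages=1024     # pages requested by the test
  surp=0                # HugePages_Surp
  resv=0                # HugePages_Rsvd
  total=1024            # HugePages_Total
  hugepagesize_kb=2048  # Hugepagesize
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
  # pool size: 1024 * 2048 kB = 2097152 kB, matching the Hugetlb line
  echo "hugetlb pool: $(( total * hugepagesize_kb )) kB"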
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.760 surplus_hugepages=0 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.760 anon_hugepages=0 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104645472 kB' 'MemAvailable: 108363164 kB' 'Buffers: 2704 kB' 'Cached: 14768924 kB' 'SwapCached: 0 kB' 'Active: 11629908 kB' 'Inactive: 3693560 kB' 'Active(anon): 11150108 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555224 kB' 'Mapped: 220224 kB' 'Shmem: 10598268 kB' 'KReclaimable: 585276 kB' 'Slab: 1468620 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883344 kB' 'KernelStack: 27120 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12717668 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.761 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:57.762 16:44:17 
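With the totals confirmed, get_nodes (traced starting above and continuing below) walks /sys/devices/system/node/node*: on this two-socket machine it records 512 pages for each node, the per-node share of the 1024-page pool, and ends with no_nodes=2, after which the per-node query reads node0's own meminfo (which reports HugePages_Total: 512). A hypothetical stand-in for that discovery step, not the script's own get_nodes, could look like:

  # Hypothetical sketch of NUMA-node discovery: count the nodes and read each
  # node's 2 MiB HugePages_Total from its per-node meminfo file.
  shopt -s nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      # per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
      nodes_sys[$n]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
  done
  echo "no_nodes=${#nodes_sys[@]}"                  # 2 on this machine
  for n in "${!nodes_sys[@]}"; do
      echo "node$n: ${nodes_sys[$n]} hugepages"     # 512 per node here (2 x 512 = 1024)
  done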
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58728564 kB' 'MemUsed: 6930444 kB' 'SwapCached: 0 kB' 'Active: 2973676 kB' 'Inactive: 237284 kB' 'Active(anon): 2734252 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2924924 kB' 'Mapped: 91556 kB' 'AnonPages: 289276 kB' 'Shmem: 2448216 kB' 'KernelStack: 15816 kB' 'PageTables: 5736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275612 kB' 'Slab: 793084 kB' 'SReclaimable: 275612 kB' 'SUnreclaim: 517472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.762 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45916880 kB' 'MemUsed: 14762956 kB' 'SwapCached: 0 kB' 'Active: 8655916 kB' 'Inactive: 3456276 kB' 'Active(anon): 8415540 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11846744 kB' 'Mapped: 128668 kB' 'AnonPages: 265556 kB' 'Shmem: 8150092 kB' 'KernelStack: 11288 kB' 'PageTables: 2976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 309664 kB' 'Slab: 675536 kB' 'SReclaimable: 309664 kB' 'SUnreclaim: 365872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
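The snapshot record above is what the field-by-field matching that follows works through: setup/common.sh's get_meminfo switches mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo when a node is given, strips the leading "Node <n> " prefix from every line, then reads each "key: value" pair with IFS=': ' until the requested key (HugePages_Surp here) matches and its value is echoed. A minimal stand-alone sketch of that pattern, using a hypothetical helper name node_meminfo_value rather than the harness's own function:

#!/usr/bin/env bash
# Sketch of the meminfo-parsing pattern visible in the trace above; not the
# actual setup/common.sh implementation.
shopt -s extglob                        # needed for the +([0-9]) prefix strip
node_meminfo_value() {                  # hypothetical helper name
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# Example: node_meminfo_value HugePages_Surp 1  ->  prints 0 for the node shown above.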
00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.763 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 16:44:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:57.765 node0=512 expecting 512 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:57.765 node1=512 expecting 512 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:57.765 00:04:57.765 real 0m3.724s 00:04:57.765 user 0m1.528s 00:04:57.765 sys 0m2.261s 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.765 16:44:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:57.765 ************************************ 00:04:57.765 END TEST per_node_1G_alloc 00:04:57.765 ************************************ 00:04:57.765 16:44:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:57.765 16:44:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.765 16:44:18 
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.765 16:44:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.027 ************************************ 00:04:58.027 START TEST even_2G_alloc 00:04:58.027 ************************************ 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.027 16:44:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.336 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:05:01.336 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:01.336 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104670724 kB' 'MemAvailable: 108388416 kB' 'Buffers: 2704 kB' 'Cached: 14769064 kB' 'SwapCached: 0 kB' 'Active: 11631340 kB' 'Inactive: 3693560 kB' 'Active(anon): 11151540 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556004 kB' 'Mapped: 220380 kB' 'Shmem: 10598408 kB' 'KReclaimable: 585276 kB' 'Slab: 1468476 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883200 kB' 'KernelStack: 27120 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12718432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.336 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.337 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
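With anon pinned at 0, verify_nr_hugepages goes on to repeat the HugePages_Surp lookups and fold the per-node figures into nodes_test, the same bookkeeping that produced the "node0=512 expecting 512" / "node1=512 expecting 512" verdicts at the end of the previous test. Given the NRHUGE=1024 and HUGE_EVEN_ALLOC=yes settings traced earlier, the invariant being checked boils down to an even split of 1024 2 MiB pages across the two NUMA nodes. A stand-alone approximation of that check (a sketch under those assumptions, not the harness's own code):

#!/usr/bin/env bash
# Rough equivalent of the even-allocation assertion: every NUMA node should
# hold NRHUGE / <node count> hugepages, i.e. 512 each on this 2-node box.
NRHUGE=1024
nodes=(/sys/devices/system/node/node[0-9]*)
expected=$(( NRHUGE / ${#nodes[@]} ))
for n in "${nodes[@]}"; do
    # Per-node meminfo lines look like "Node 1 HugePages_Total:   512"
    total=$(awk '/HugePages_Total/ {print $NF}' "$n/meminfo")
    echo "${n##*/}=$total expecting $expected"
done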
00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104671292 kB' 'MemAvailable: 108388984 kB' 'Buffers: 2704 kB' 'Cached: 14769068 kB' 'SwapCached: 0 kB' 'Active: 11631248 kB' 'Inactive: 3693560 kB' 'Active(anon): 11151448 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555948 kB' 'Mapped: 220324 kB' 'Shmem: 10598412 kB' 'KReclaimable: 585276 kB' 'Slab: 1468476 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883200 kB' 'KernelStack: 27120 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12718452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.338 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.605 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.606 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104671840 kB' 'MemAvailable: 108389532 kB' 'Buffers: 2704 kB' 'Cached: 14769084 kB' 'SwapCached: 0 kB' 'Active: 11630768 kB' 'Inactive: 3693560 kB' 'Active(anon): 11150968 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555928 kB' 'Mapped: 220248 kB' 'Shmem: 10598428 kB' 'KReclaimable: 585276 kB' 'Slab: 1468484 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883208 kB' 'KernelStack: 27120 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12718472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.607 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 
16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.608 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.609 nr_hugepages=1024 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.609 resv_hugepages=0 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.609 surplus_hugepages=0 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.609 anon_hugepages=0 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
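The trace above has just computed surp=0 and resv=0 and echoed nr_hugepages=1024, so the accounting check that even_2G_alloc performs at hugepages.sh@107-109 can be confirmed by hand. A minimal sketch in plain bash, assuming the same variable names as the traced script; the numbers are taken only from the values echoed above:

  nr_hugepages=1024; surp=0; resv=0            # values echoed by the trace above
  (( 1024 == nr_hugepages + surp + resv ))     # 1024 == 1024 + 0 + 0 -> true
  (( 1024 == nr_hugepages ))                   # also true
  echo "$(( nr_hugepages * 2048 )) kB"         # 2097152 kB, matching the Hugetlb line

With HugePages_Total=1024, HugePages_Surp=0 and HugePages_Rsvd=0, both arithmetic tests pass and the script goes on to re-read HugePages_Total below.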
00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104671168 kB' 'MemAvailable: 108388860 kB' 'Buffers: 2704 kB' 'Cached: 14769108 kB' 'SwapCached: 0 kB' 'Active: 11631088 kB' 'Inactive: 3693560 kB' 'Active(anon): 11151288 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556360 kB' 'Mapped: 220472 kB' 'Shmem: 10598452 kB' 'KReclaimable: 585276 kB' 'Slab: 1468476 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883200 kB' 'KernelStack: 27152 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12718496 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.609 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.609 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 
16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.610 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.611 
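The long run of IFS=': ' / read -r var val _ / continue entries above is setup/common.sh's get_meminfo helper stepping through /proc/meminfo one key at a time until it reaches the field the caller asked for (HugePages_Total here). A minimal standalone sketch of that scan, using an illustrative function name rather than the script's own:

get_meminfo_field() {
    # Walk "Key: value" pairs until the requested key is found, then print its value.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_field HugePages_Total   # 1024 in this run, matching the echo further below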
16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.611 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58760284 kB' 'MemUsed: 6898724 kB' 'SwapCached: 0 kB' 'Active: 2973696 kB' 'Inactive: 237284 kB' 'Active(anon): 2734272 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2925068 kB' 'Mapped: 91556 kB' 'AnonPages: 289168 kB' 'Shmem: 2448360 kB' 'KernelStack: 15800 kB' 'PageTables: 
5688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275612 kB' 'Slab: 792940 kB' 'SReclaimable: 275612 kB' 'SUnreclaim: 517328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.612 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45911928 kB' 'MemUsed: 14767908 kB' 'SwapCached: 0 kB' 'Active: 8656708 kB' 'Inactive: 3456276 kB' 'Active(anon): 8416332 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11846784 kB' 'Mapped: 128692 kB' 'AnonPages: 266352 kB' 'Shmem: 8150132 kB' 'KernelStack: 11304 kB' 
'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 309664 kB' 'Slab: 675536 kB' 'SReclaimable: 309664 kB' 'SUnreclaim: 365872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.613 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 
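When get_meminfo is called with a node index, as in the HugePages_Surp lookups traced here, it reads /sys/devices/system/node/nodeN/meminfo instead and strips the leading "Node N " prefix from every line (the mem=("${mem[@]#Node +([0-9]) }") expansion above) before running the same key scan; hugepages.sh then adds the result into nodes_test for that node. A hedged per-node sketch under those assumptions (illustrative name, extglob enabled as the real helper requires):

shopt -s extglob                      # needed for the +([0-9]) pattern used below
get_node_meminfo_field() {
    local get=$1 node=$2 line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }   # per-node lines read "Node 1 HugePages_Surp: 0"
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}
get_node_meminfo_field HugePages_Surp 1   # 0 for both nodes in this run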
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.614 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:01.615 node0=512 expecting 512 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:01.615 node1=512 expecting 512 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:01.615 00:05:01.615 real 0m3.731s 00:05:01.615 user 0m1.472s 00:05:01.615 sys 0m2.308s 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.615 16:44:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:01.615 ************************************ 00:05:01.615 END TEST even_2G_alloc 00:05:01.615 ************************************ 00:05:01.615 16:44:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:01.615 16:44:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.615 16:44:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.615 16:44:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.615 ************************************ 00:05:01.615 START TEST odd_alloc 00:05:01.615 
************************************ 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.615 16:44:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.931 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 
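Between the driver lines emitted by setup.sh, the trace above shows odd_alloc requesting an odd count of 1025 hugepages (HUGEMEM=2049, HUGE_EVEN_ALLOC=yes) and splitting them across the two NUMA nodes as node1=512 and node0=513. A small sketch of that uneven split, assuming two nodes and using an illustrative helper name rather than get_test_nr_hugepages_per_node itself:

# Sketch of the odd split seen in the trace: 1025 pages over 2 nodes -> node1=512, node0=513.
split_hugepages() {                       # illustrative name, not the SPDK helper
    local total=$1 nodes=$2 i pages
    for (( i = nodes - 1; i >= 0; i-- )); do
        pages=$(( total / (i + 1) ))      # remaining pages over remaining nodes
        echo "node${i}=${pages}"
        total=$(( total - pages ))
    done
}
split_hugepages 1025 2                    # prints node1=512 then node0=513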
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:04.931 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104688344 kB' 'MemAvailable: 108406036 kB' 'Buffers: 2704 kB' 'Cached: 14769240 kB' 'SwapCached: 0 kB' 'Active: 11631892 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152092 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556800 kB' 'Mapped: 220320 kB' 'Shmem: 10598584 kB' 'KReclaimable: 585276 kB' 'Slab: 1467884 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 882608 kB' 'KernelStack: 27104 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12719384 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235928 kB' 
'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.931 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.932 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.932 
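The trace above is setup/common.sh's get_meminfo helper walking every /proc/meminfo key until it reaches AnonHugePages, echoing its value (0), which hugepages.sh@97 then stores as anon=0. A minimal sketch of that lookup, assuming a plain read loop rather than the mapfile/per-node handling the real script traces; the function name is illustrative and not part of the SPDK tree:

  # get_meminfo_sketch KEY -- illustrative helper, not the SPDK implementation
  get_meminfo_sketch() {
      local get=$1 var val _
      # Same split the trace shows: IFS=': ' with read -r var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"   # numeric value (kB for sizes, bare count for HugePages_*)
              return 0
          fi
      done </proc/meminfo
      echo 0                # key absent
  }

The same lookup repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total, which is why each pass re-dumps the full meminfo snapshot before matching its key.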
16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104689852 kB' 'MemAvailable: 108407544 kB' 'Buffers: 2704 kB' 'Cached: 14769248 kB' 'SwapCached: 0 kB' 'Active: 11632072 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152272 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557008 kB' 'Mapped: 220264 kB' 'Shmem: 10598592 kB' 'KReclaimable: 585276 kB' 'Slab: 1467860 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 882584 kB' 'KernelStack: 27136 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12719404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235896 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.933 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104691136 kB' 'MemAvailable: 108408828 kB' 'Buffers: 2704 kB' 'Cached: 14769260 kB' 'SwapCached: 0 kB' 'Active: 11631752 kB' 'Inactive: 3693560 kB' 'Active(anon): 11151952 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556644 kB' 'Mapped: 220264 kB' 'Shmem: 10598604 kB' 'KReclaimable: 585276 kB' 'Slab: 1467952 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 882676 kB' 'KernelStack: 27136 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12719424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235896 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.934 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 
16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.935 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:04.936 nr_hugepages=1025 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.936 resv_hugepages=0 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.936 surplus_hugepages=0 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.936 anon_hugepages=0 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.936 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104689624 kB' 'MemAvailable: 108407316 kB' 'Buffers: 2704 kB' 'Cached: 14769260 kB' 'SwapCached: 0 kB' 'Active: 11631752 kB' 'Inactive: 3693560 kB' 'Active(anon): 11151952 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556644 kB' 'Mapped: 220264 kB' 'Shmem: 10598604 kB' 'KReclaimable: 585276 kB' 'Slab: 1467952 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 882676 kB' 'KernelStack: 27136 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12719444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235896 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 
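Just above, hugepages.sh echoes the values it has collected (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and evaluates the checks at @107 and @109 before starting the HugePages_Total lookup traced from here on. A sketch of that arithmetic with the operands exactly as they appear in the trace; how the script derives each variable is not shown in this excerpt:

  # Values echoed by hugepages.sh@102-@105; 1025 is the odd page count under test.
  nr_hugepages=1025 surp=0 resv=0
  # The two assertions traced at hugepages.sh@107 and @109: the odd-sized request
  # must be fully accounted for and not inflated by surplus or reserved pages.
  (( 1025 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
  (( 1025 == nr_hugepages ))               || echo "allocation did not reach 1025 pages"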
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.937 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.938 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58758596 kB' 'MemUsed: 6900412 kB' 'SwapCached: 0 kB' 'Active: 2974324 kB' 'Inactive: 237284 kB' 'Active(anon): 2734900 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2925188 kB' 'Mapped: 91556 kB' 'AnonPages: 289616 kB' 'Shmem: 2448480 kB' 'KernelStack: 15848 kB' 'PageTables: 5740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275612 kB' 'Slab: 792456 kB' 'SReclaimable: 275612 kB' 'SUnreclaim: 516844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.939 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
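The xtrace above is setup/common.sh:get_meminfo doing a field lookup over /proc/meminfo (or over a per-node /sys/devices/system/node/nodeN/meminfo file), walking every "key: value" pair until it reaches the requested one and echoing its value. A minimal standalone sketch of the same idea, simplified and not the exact SPDK helper:

  # Look up one field in /proc/meminfo, or in a per-node meminfo file when a
  # node id is given. Per-node files prefix every line with "Node <n> ", so
  # strip that first; the rest has the same "Key: value [kB]" shape.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}              # e.g. HugePages_Surp, optional node id
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local key val _
      while read -r key val _; do
          if [[ ${key%:} == "$get" ]]; then
              echo "$val"                   # value in kB, or a bare page count
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }
  # On this run, get_meminfo_sketch HugePages_Surp 0 would print 0, matching
  # the "echo 0" seen in the trace below.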
00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45931744 kB' 'MemUsed: 14748092 kB' 'SwapCached: 0 kB' 'Active: 8657480 kB' 'Inactive: 3456276 kB' 'Active(anon): 8417104 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11846820 kB' 'Mapped: 128708 kB' 'AnonPages: 267024 kB' 'Shmem: 8150168 kB' 'KernelStack: 11288 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 309664 kB' 'Slab: 675496 kB' 'SReclaimable: 309664 kB' 'SUnreclaim: 365832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
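The assertion this trace is building toward is the odd_alloc check: 1025 two-megabyte pages cannot be split evenly across the two NUMA nodes, so the test accepts 512 on one node and 513 on the other in either order, which is why it compares sorted per-node counts ("node0=512 expecting 513" / "node1=513 expecting 512" later in the log). A rough equivalent check, assuming a two-node machine with 2048 kB hugepages and reading the per-node nr_hugepages sysfs files instead of the meminfo parse traced here:

  # Collect the 2 MiB hugepage count of every NUMA node and verify the odd
  # total split as 512 + 513, regardless of which node got the extra page.
  verify_odd_split_sketch() {
      local -a got=()
      local f
      for f in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-2048kB/nr_hugepages; do
          got+=("$(cat "$f")")
      done
      # sort so "513 512" and "512 513" count as the same distribution
      [[ "$(printf '%s\n' "${got[@]}" | sort -n | xargs)" == "512 513" ]]
  }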
00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.940 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.941 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:04.942 node0=512 expecting 513 00:05:04.942 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.942 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.942 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.942 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:04.942 node1=513 expecting 512 00:05:04.942 16:44:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:04.942 00:05:04.942 real 0m3.292s 00:05:04.942 user 0m1.157s 00:05:04.942 sys 0m2.157s 00:05:04.942 16:44:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.942 16:44:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.942 ************************************ 00:05:04.942 END TEST odd_alloc 00:05:04.942 ************************************ 00:05:04.942 16:44:25 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:04.942 16:44:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.942 16:44:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.942 16:44:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.204 ************************************ 00:05:05.204 START TEST custom_alloc 00:05:05.204 ************************************ 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:05.204 16:44:25 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.204 16:44:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.512 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:08.512 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:05:08.512 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:08.512 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.778 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103641824 kB' 'MemAvailable: 107359516 kB' 'Buffers: 2704 kB' 'Cached: 14769416 kB' 'SwapCached: 0 kB' 'Active: 11632936 kB' 'Inactive: 3693560 kB' 'Active(anon): 11153136 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557568 kB' 'Mapped: 220280 kB' 'Shmem: 10598760 kB' 'KReclaimable: 585276 kB' 'Slab: 1468440 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883164 kB' 'KernelStack: 27168 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12720492 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.779 16:44:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.779 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
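The AnonHugePages lookup above ends with anon=0: setup/common.sh's get_meminfo reads the whole meminfo file, strips any "Node <N> " prefix, and scans field by field until the requested key matches. A simplified stand-alone sketch of that lookup, assuming plain /proc/meminfo and a hypothetical helper name (get_meminfo_sketch is not the verbatim code from setup/common.sh):

# Minimal sketch: return the value of one field from /proc/meminfo
# (or a node-specific meminfo when a node index is given).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        # Node files prefix every row with "Node <N> "; drop that prefix.
        [[ $line == Node\ * ]] && line=${line#Node * }
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "$mem_f"
    echo 0
}

For example, get_meminfo_sketch HugePages_Surp or get_meminfo_sketch HugePages_Free 0 would return the same kind of value the trace below extracts, just without the per-field xtrace output.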
00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103642708 kB' 'MemAvailable: 107360400 kB' 'Buffers: 2704 kB' 'Cached: 14769420 kB' 'SwapCached: 0 kB' 'Active: 11632424 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152624 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557084 kB' 'Mapped: 220224 kB' 'Shmem: 10598764 kB' 'KReclaimable: 585276 kB' 'Slab: 1468440 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883164 kB' 'KernelStack: 27136 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12720512 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235928 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.780 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 
16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
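The meminfo snapshots printed above are internally consistent with the custom allocation this test requested: HugePages_Total 1536 is nodes_hp[0]=512 plus nodes_hp[1]=1024, and at a 2048 kB hugepage size that is 3145728 kB, matching the Hugetlb line. A quick stand-alone arithmetic check (hypothetical snippet, not part of the test scripts):

echo $(( 512 + 1024 ))      # 1536, the expected HugePages_Total
echo $(( 1536 * 2048 )) kB  # 3145728 kB, matching the Hugetlb field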
00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.781 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
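Earlier in the trace (hugepages.sh@181-187), the per-node targets in nodes_hp are folded into the HUGENODE value handed to scripts/setup.sh, ending up as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' with nr_hugepages=1536. A compressed sketch of that net effect, assuming the two-node layout shown above (HUGENODE_STR and the comma join are illustrative, not the verbatim bookkeeping in setup/hugepages.sh):

# Fold per-node hugepage targets into a HUGENODE assignment list.
declare -a nodes_hp=([0]=512 [1]=1024)
declare -a HUGENODE=()
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
done
# Joined with commas this matches the value the test exports:
HUGENODE_STR=$(IFS=,; echo "${HUGENODE[*]}")
echo "$HUGENODE_STR"   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$nr_hugepages"   # 1536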
00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103643332 kB' 'MemAvailable: 107361024 kB' 'Buffers: 2704 kB' 'Cached: 14769432 kB' 'SwapCached: 0 kB' 'Active: 11632492 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152692 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557140 kB' 'Mapped: 220284 kB' 'Shmem: 
10598776 kB' 'KReclaimable: 585276 kB' 'Slab: 1468476 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883200 kB' 'KernelStack: 27120 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12720532 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235928 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 
16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.782 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.783 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
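The long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pairs above is bash xtrace from the get_meminfo helper in setup/common.sh walking every meminfo field until it reaches the requested key (HugePages_Rsvd in this pass). A minimal sketch of that lookup, reconstructed from the trace rather than copied from the source (the sed-based "Node N" prefix strip and the trailing echo 0 fallback are simplifications, not the verbatim helper):

get_meminfo() {                               # usage: get_meminfo <field> [numa-node]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # With a node argument the per-node meminfo is read instead, as the trace
    # shows further down for node0 and node1.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Same split the trace performs: IFS=': ' and read -r var val _ per line.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"                       # e.g. 0 for HugePages_Rsvd in this run
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}

Called as get_meminfo HugePages_Rsvd it scans /proc/meminfo top to bottom, which is why every non-matching field shows up as its own continue in the trace above.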
00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:08.784 nr_hugepages=1536 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.784 resv_hugepages=0 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.784 surplus_hugepages=0 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.784 anon_hugepages=0 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103643324 kB' 'MemAvailable: 107361016 kB' 'Buffers: 2704 kB' 'Cached: 14769460 kB' 'SwapCached: 0 kB' 'Active: 11632512 kB' 'Inactive: 3693560 kB' 'Active(anon): 11152712 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557116 kB' 'Mapped: 220284 kB' 'Shmem: 10598804 kB' 'KReclaimable: 585276 kB' 'Slab: 1468476 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883200 kB' 'KernelStack: 27120 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12720552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235928 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.784 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.785 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58758380 kB' 'MemUsed: 6900628 kB' 'SwapCached: 0 kB' 'Active: 2975140 kB' 'Inactive: 237284 kB' 'Active(anon): 2735716 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2925248 kB' 'Mapped: 91556 kB' 'AnonPages: 290388 kB' 'Shmem: 2448540 kB' 'KernelStack: 15864 kB' 'PageTables: 5800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275612 kB' 'Slab: 792756 kB' 'SReclaimable: 275612 kB' 'SUnreclaim: 517144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.786 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
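By this point the pass has established nr_hugepages=1536 with resv_hugepages=0 and surplus_hugepages=0 globally, and get_nodes has recorded the requested split of 512 pages on node0 and 1024 on node1. The loop being traced here folds the reserved count plus each node's HugePages_Surp into the expected per-node totals. A hedged sketch of that bookkeeping, with the surplus read directly from the node's meminfo in place of the traced get_meminfo call (array names follow the trace; the concrete values are this run's, not constants of the test):

nodes_test=([0]=512 [1]=1024)     # per-node page counts this custom_alloc pass asked for
resv=0                            # HugePages_Rsvd reported earlier in the trace
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node surplus pages, e.g. "Node 0 HugePages_Surp: 0" in the node's meminfo
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
           "/sys/devices/system/node/node${node}/meminfo")
    (( nodes_test[node] += surp ))
done
echo "expected per-node totals: ${nodes_test[*]}"   # 512 1024 in this run

Because resv and both per-node surplus values are 0 here, nodes_test stays at 512/1024, which is what the later comparison against nodes_sys is meant to confirm.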
00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:08.787 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.788 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.788 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.788 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 44885028 kB' 'MemUsed: 15794808 kB' 'SwapCached: 0 kB' 'Active: 8657968 kB' 'Inactive: 3456276 kB' 'Active(anon): 8417592 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11846956 kB' 'Mapped: 128740 kB' 'AnonPages: 267336 kB' 'Shmem: 8150304 kB' 'KernelStack: 11272 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 309664 kB' 'Slab: 675720 kB' 'SReclaimable: 309664 kB' 'SUnreclaim: 366056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:08.788 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.788 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.788 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.051 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
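The node-local reads (node0 earlier, node1 here) load the whole per-node meminfo with mapfile and strip the leading "Node <n> " token with an extglob parameter expansion before the field scan. A small standalone illustration of that expansion, using sample lines taken from the node0 values in this log:

shopt -s extglob                    # +([0-9]) below is an extended glob pattern
mapfile -t mem <<'EOF'
Node 0 MemTotal:       65659008 kB
Node 0 MemFree:        58758380 kB
Node 0 HugePages_Surp:        0
EOF
mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node 0 " prefix from every element
printf '%s\n' "${mem[@]}"           # now plain meminfo-style "Key: value" lines

After the strip, the same IFS=': ' / read -r var val _ scan works unchanged for both the global and the per-node files.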
00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:09.052 node0=512 expecting 512 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:09.052 node1=1024 expecting 1024 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:09.052 00:05:09.052 real 0m3.849s 00:05:09.052 user 0m1.572s 00:05:09.052 sys 0m2.302s 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.052 16:44:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.052 ************************************ 00:05:09.052 END TEST custom_alloc 00:05:09.052 ************************************ 00:05:09.052 16:44:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:09.052 16:44:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.052 16:44:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.052 16:44:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.052 ************************************ 00:05:09.052 START TEST no_shrink_alloc 00:05:09.052 ************************************ 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- 
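The setup/hugepages.sh@49-@73 lines traced around this point show the harness deciding how many hugepages to expect on each NUMA node before the no_shrink_alloc run (here size=2097152 kB with a 2048 kB hugepage size gives nr_hugepages=1024, pinned to node 0). A minimal sketch of that accounting, reconstructed from the xtrace output, is below; the function body is an approximation and the even-split fallback branch is an assumption not exercised in this run.

  # Sketch of the per-node hugepage accounting visible in the hugepages.sh trace.
  # Only the explicit-node branch appears in this run; the fallback is assumed.
  get_test_nr_hugepages_per_node() {
      local _nr_hugepages=$nr_hugepages     # 1024 here (2097152 kB / 2048 kB pages)
      local _no_nodes=2                     # NUMA nodes on this system
      local user_nodes=("$@")               # explicit node ids, e.g. ('0')
      nodes_test=()
      if ((${#user_nodes[@]} > 0)); then
          # Expect the whole allocation on each node the caller asked for.
          for node in "${user_nodes[@]}"; do
              nodes_test[node]=$_nr_hugepages
          done
      else
          # Assumed fallback: split the allocation evenly across all nodes.
          for ((node = 0; node < _no_nodes; node++)); do
              nodes_test[node]=$((_nr_hugepages / _no_nodes))
          done
      fi
  }

The verification step earlier in the log compares these expected counts against what the kernel actually reports per node, which is where the 'node0=512 expecting 512' / 'node1=1024 expecting 1024' lines come from.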
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.052 16:44:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.358 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:12.358 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:12.358 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104657936 kB' 'MemAvailable: 108375628 kB' 'Buffers: 2704 kB' 'Cached: 14769592 kB' 'SwapCached: 0 kB' 'Active: 11634392 kB' 'Inactive: 3693560 kB' 'Active(anon): 11154592 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558960 kB' 'Mapped: 220376 kB' 'Shmem: 10598936 kB' 'KReclaimable: 585276 kB' 'Slab: 1468480 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883204 kB' 'KernelStack: 27312 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12724400 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235976 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.622 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.623 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104660380 kB' 'MemAvailable: 108378072 kB' 'Buffers: 2704 kB' 'Cached: 14769592 kB' 'SwapCached: 0 kB' 'Active: 11634236 kB' 'Inactive: 3693560 kB' 'Active(anon): 11154436 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558896 kB' 'Mapped: 220340 kB' 'Shmem: 10598936 kB' 'KReclaimable: 585276 kB' 'Slab: 1468472 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883196 kB' 'KernelStack: 27216 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12724416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235928 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 
16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.624 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.625 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 
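Each of the long [[ ... ]] / continue runs in this part of the log is one pass of the common.sh get_meminfo helper scanning /proc/meminfo for a single key (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd). A rough reconstruction from the traced common.sh@16-@33 lines follows; the per-node branch and the extglob requirement are filled in as assumptions rather than taken verbatim from the trace.

  # Rough reconstruction of the get_meminfo helper seen in the xtrace: read
  # /proc/meminfo (or a per-node meminfo file), strip any "Node N " prefix,
  # then print the value of the requested field.
  shopt -s extglob                      # assumed; needed for the +([0-9]) strip pattern
  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem
      # Assumed branch: use the per-NUMA-node file when a node id is supplied.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"                   # e.g. 0 for AnonHugePages / HugePages_Surp here
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

With xtrace enabled, every non-matching key produces the [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read triplets that dominate this section, and the final echo 0 / return 0 pair marks the key being found.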
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104659904 kB' 'MemAvailable: 108377596 kB' 'Buffers: 2704 kB' 'Cached: 14769612 kB' 'SwapCached: 0 kB' 'Active: 11634612 kB' 'Inactive: 3693560 kB' 'Active(anon): 11154812 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559152 kB' 'Mapped: 220340 kB' 'Shmem: 10598956 kB' 'KReclaimable: 585276 kB' 'Slab: 1468448 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883172 kB' 'KernelStack: 27120 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12763332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 
kB' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.626 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.892 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 
16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.893 16:44:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.893 nr_hugepages=1024 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.893 resv_hugepages=0 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.893 surplus_hugepages=0 00:05:12.893 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.893 anon_hugepages=0 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104658972 kB' 'MemAvailable: 108376664 kB' 'Buffers: 2704 kB' 'Cached: 14769632 kB' 'SwapCached: 0 kB' 'Active: 11634860 kB' 'Inactive: 3693560 kB' 'Active(anon): 11155060 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559544 kB' 'Mapped: 220452 kB' 'Shmem: 10598976 kB' 'KReclaimable: 585276 kB' 'Slab: 1468448 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883172 kB' 'KernelStack: 27296 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12724092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.894 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 
16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.895 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57682736 kB' 'MemUsed: 7976272 kB' 'SwapCached: 0 kB' 'Active: 2978292 kB' 'Inactive: 237284 kB' 'Active(anon): 2738868 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2925304 kB' 'Mapped: 91576 kB' 'AnonPages: 293436 kB' 'Shmem: 2448596 kB' 'KernelStack: 15960 kB' 'PageTables: 6132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275612 kB' 'Slab: 792724 kB' 'SReclaimable: 275612 kB' 'SUnreclaim: 517112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.896 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.896 16:44:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [meminfo scan: PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free do not match HugePages_Surp and are skipped with continue]
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:12.897 node0=1024 expecting 1024
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.897 16:44:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:16.204 [scripts/setup.sh: 0000:00:01.0-0000:00:01.7 and 0000:80:01.0-0000:80:01.7 (8086 0b00) plus 0000:65:00.0 (144d a80a) are already using the vfio-pci driver]
00:05:16.471 INFO: Requested 512 hugepages but 1024 already allocated on node0
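The long run of continue/IFS/read entries above is setup/common.sh's get_meminfo walking a meminfo file key by key until the requested field matches. A minimal stand-alone sketch of that pattern, with an illustrative helper name (not the exact SPDK function):

    #!/usr/bin/env bash
    # Read one field from a meminfo-style file the way the traced loop does:
    # split each line on ': ', skip non-matching keys, print the matching value.
    get_meminfo_field() {
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long "continue" runs in the trace
        echo "$val"                        # e.g. "0" for HugePages_Surp above
        return 0
      done < "$mem_f"
      return 1
    }

    get_meminfo_field HugePages_Surp /proc/meminfo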
00:05:16.471 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:16.471 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node; local sorted_t; local sorted_s; local surp; local resv; local anon
00:05:16.471 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:16.471 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:16.471 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [get_meminfo setup: get=AnonHugePages, node= is empty so the per-node path /sys/devices/system/node/node/meminfo is not used and mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }")]
00:05:16.471 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104661312 kB' 'MemAvailable: 108379004 kB' 'Buffers: 2704 kB' 'Cached: 14769748 kB' 'SwapCached: 0 kB' 'Active: 11637072 kB' 'Inactive: 3693560 kB' 'Active(anon): 11157272 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561484 kB' 'Mapped: 220356 kB' 'Shmem: 10599092 kB' 'KReclaimable: 585276 kB' 'Slab: 1469208 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883932 kB' 'KernelStack: 27344 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12746752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236200 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB'
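The get_meminfo setup entries above show how the source file is picked: with no node argument it reads /proc/meminfo, while a node-qualified call would read the per-node sysfs meminfo and strip the leading "Node <N> " prefix from every line. A sketch of that selection logic under those assumptions (helper name illustrative):

    # Choose a meminfo source like the traced setup: system-wide by default,
    # per-node when a node number is given (per-node lines start with "Node <N> ").
    pick_meminfo() {
      local node=$1 mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Drop the "Node <N> " prefix so "Key: value" parsing stays identical
      # for both sources.
      sed -E 's/^Node [0-9]+ +//' "$mem_f"
    }

    pick_meminfo ""    # node is empty, falls back to /proc/meminfo as in the trace
    pick_meminfo 0     # per-node view, the one behind the node0=1024 check earlier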
00:05:16.472 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [meminfo scan: MemTotal through HardwareCorrupted do not match AnonHugePages and are skipped with continue]
00:05:16.472 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:16.472 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.472 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:16.472 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:16.472 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:16.472 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-25 -- # [get_meminfo setup: get=HugePages_Surp, node= is empty, mem_f=/proc/meminfo, no per-node meminfo path]
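Each get_meminfo call above rescans the whole file for a single key, so the anon, surp and resv lookups each trigger another full pass, which is why the trace repeats. One awk pass can collect all of the hugepage counters at once; a sketch of that alternative (not what the test script itself does):

    # Collect the hugepage-related counters from /proc/meminfo in a single pass.
    awk -F': *' '/^HugePages_(Total|Free|Rsvd|Surp)/ { gsub(/ .*/, "", $2); print $1"="$2 }' /proc/meminfo
    # For the snapshots above this prints:
    #   HugePages_Total=1024
    #   HugePages_Free=1024
    #   HugePages_Rsvd=0
    #   HugePages_Surp=0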
00:05:16.473 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.473 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.473 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104665376 kB' 'MemAvailable: 108383068 kB' 'Buffers: 2704 kB' 'Cached: 14769752 kB' 'SwapCached: 0 kB' 'Active: 11635336 kB' 'Inactive: 3693560 kB' 'Active(anon): 11155536 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559740 kB' 'Mapped: 220356 kB' 'Shmem: 10599096 kB' 'KReclaimable: 585276 kB' 'Slab: 1469168 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883892 kB' 'KernelStack: 27152 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12725068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB'
00:05:16.473 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [meminfo scan: MemTotal through HugePages_Rsvd do not match HugePages_Surp and are skipped with continue]
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [get_meminfo setup: get=HugePages_Rsvd, node= is empty, mem_f=/proc/meminfo, no per-node meminfo path, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }")]
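For reference on what the surp and resv values mean under the usual /proc/meminfo semantics (an assumption about intent, not something stated in this log): HugePages_Surp counts pages allocated above nr_hugepages, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in, which are still included in HugePages_Free. A sketch of the kind of sanity arithmetic those counters allow:

    # With the values from the snapshots above:
    total=1024 free=1024 rsvd=0 surp=0
    persistent=$((total - surp))   # pages configured via nr_hugepages: 1024
    available=$((free - rsvd))     # pages that can still be handed out: 1024
    echo "persistent=$persistent available=$available"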
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104664732 kB' 'MemAvailable: 108382424 kB' 'Buffers: 2704 kB' 'Cached: 14769772 kB' 'SwapCached: 0 kB' 'Active: 11634884 kB' 'Inactive: 3693560 kB' 'Active(anon): 11155084 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559176 kB' 'Mapped: 220356 kB' 'Shmem: 10599116 kB' 'KReclaimable: 585276 kB' 'Slab: 1469252 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883976 kB' 'KernelStack: 27280 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12723488 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235976 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB'
00:05:16.475 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [meminfo scan for HugePages_Rsvd: MemTotal through Mapped skipped with continue so far …]
# read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 
16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.476 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:16.477 nr_hugepages=1024 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.477 resv_hugepages=0 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.477 surplus_hugepages=0 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.477 anon_hugepages=0 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104668176 kB' 'MemAvailable: 108385868 kB' 'Buffers: 2704 kB' 'Cached: 14769792 kB' 'SwapCached: 0 kB' 'Active: 11635744 kB' 'Inactive: 3693560 kB' 'Active(anon): 11155944 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559632 kB' 'Mapped: 220356 kB' 'Shmem: 10599136 kB' 'KReclaimable: 585276 kB' 'Slab: 1469252 kB' 'SReclaimable: 585276 kB' 'SUnreclaim: 883976 kB' 'KernelStack: 27232 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12723632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236008 kB' 'VmallocChunk: 0 kB' 'Percpu: 157824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4681076 kB' 'DirectMap2M: 29601792 kB' 'DirectMap1G: 101711872 kB' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.477 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.478 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 57697388 kB' 'MemUsed: 7961620 kB' 'SwapCached: 0 kB' 'Active: 2977504 kB' 'Inactive: 237284 kB' 'Active(anon): 2738080 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2925340 kB' 'Mapped: 91576 kB' 'AnonPages: 292548 kB' 'Shmem: 2448632 kB' 'KernelStack: 15800 kB' 'PageTables: 5564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275612 kB' 'Slab: 793000 kB' 'SReclaimable: 275612 kB' 'SUnreclaim: 517388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 
16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.479 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.480 node0=1024 expecting 1024 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.480 00:05:16.480 real 0m7.565s 00:05:16.480 user 0m2.983s 00:05:16.480 sys 0m4.685s 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.480 16:44:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:16.480 ************************************ 00:05:16.480 END TEST no_shrink_alloc 00:05:16.480 ************************************ 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:16.741 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.742 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.742 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.742 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.742 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:16.742 16:44:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:16.742 00:05:16.742 real 0m26.393s 00:05:16.742 user 0m10.219s 00:05:16.742 sys 0m16.467s 00:05:16.742 16:44:36 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.742 16:44:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:16.742 ************************************ 00:05:16.742 END TEST hugepages 00:05:16.742 ************************************ 00:05:16.742 16:44:36 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:16.742 16:44:36 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.742 16:44:36 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.742 16:44:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:16.742 ************************************ 00:05:16.742 START TEST driver 00:05:16.742 ************************************ 00:05:16.742 16:44:36 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:16.742 * Looking for test storage... 
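
The clear_hp call traced above (hugepages.sh@217, then @37-45) walks every NUMA node and resets each hugepage pool before the driver tests start; the bare "echo 0" entries presumably target each pool's nr_hugepages file, since xtrace does not print redirections, and CLEAR_HUGE=yes is exported for later setup.sh runs. A rough equivalent, with the sysfs target treated as an assumption:

  # Rough equivalent of clear_hp (hugepages.sh@37-45); writing to nr_hugepages
  # is an assumption, as the trace omits the redirection targets.
  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"   # release every reserved hugepage
          done
      done
      export CLEAR_HUGE=yes
  }
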
00:05:16.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:16.742 16:44:36 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:16.742 16:44:36 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.742 16:44:36 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.034 16:44:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:22.034 16:44:41 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.034 16:44:41 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.034 16:44:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:22.034 ************************************ 00:05:22.034 START TEST guess_driver 00:05:22.034 ************************************ 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:22.034 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.034 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.034 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.034 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.034 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:22.034 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:22.034 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:22.034 16:44:41 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:22.034 Looking for driver=vfio-pci 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.034 16:44:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:25.432 16:44:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.432 16:44:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.727 00:05:30.727 real 0m8.669s 00:05:30.727 user 0m2.785s 00:05:30.727 sys 0m5.105s 00:05:30.727 16:44:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.727 16:44:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:30.727 ************************************ 00:05:30.727 END TEST guess_driver 00:05:30.727 ************************************ 00:05:30.727 00:05:30.727 real 0m13.591s 00:05:30.727 user 0m4.157s 00:05:30.727 sys 0m7.835s 00:05:30.727 16:44:50 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.727 
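
The driver pick traced at setup/driver.sh@21-37 settles on vfio-pci because IOMMU groups are present (314 of them) and modprobe can resolve vfio_pci to real kernel objects; the repeated "-> == ->" / "vfio-pci == vfio-pci" checks at @57-61 then confirm that every device line reported by "setup output config" was bound to that driver, so fail stays 0. A condensed reading of the traced checks (the exact combination in driver.sh may differ):

  # Condensed sketch of the vfio-pci decision traced at setup/driver.sh@21-37.
  # The unsafe no-IOMMU fallback is an assumption; the trace only shows the
  # group count being tested and unsafe_vfio read as N.
  pick_vfio() {
      local unsafe_vfio=N
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
          if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
              echo vfio-pci
              return 0
          fi
      fi
      echo 'No valid driver found'
      return 1
  }
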
16:44:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:30.727 ************************************ 00:05:30.727 END TEST driver 00:05:30.727 ************************************ 00:05:30.727 16:44:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:30.727 16:44:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.727 16:44:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.727 16:44:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:30.727 ************************************ 00:05:30.727 START TEST devices 00:05:30.727 ************************************ 00:05:30.727 16:44:50 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:30.727 * Looking for test storage... 00:05:30.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:30.727 16:44:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:30.727 16:44:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:30.728 16:44:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.728 16:44:50 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:34.936 16:44:54 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:34.936 No valid GPT data, 
bailing 00:05:34.936 16:44:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:34.936 16:44:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:34.936 16:44:54 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:34.936 16:44:54 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.936 16:44:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:34.936 ************************************ 00:05:34.936 START TEST nvme_mount 00:05:34.936 ************************************ 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:34.936 16:44:54 
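
Device selection at setup/devices.sh@194-211 keeps an NVMe namespace only if it is not zoned, shows no usable partition signature (spdk-gpt.py reports "No valid GPT data, bailing" and blkid returns an empty PTTYPE), and is at least min_disk_size=3221225472 bytes; nvme0n1 at 0000:65:00.0 (about 1.9 TB) qualifies and becomes the test disk. A simplified filter along those lines, with the spdk-gpt.py step left out:

  # Simplified version of the selection traced at setup/devices.sh@194-211.
  # The real check also consults scripts/spdk-gpt.py before falling back to blkid.
  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes
  blocks=()
  for dev in /sys/block/nvme*n*; do
      name=${dev##*/}
      [[ $(cat "$dev/queue/zoned" 2>/dev/null) == none ]] || continue   # skip zoned namespaces
      [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]] || continue     # skip disks already partitioned
      size_bytes=$(( $(cat "$dev/size") * 512 ))                        # sectors to bytes
      (( size_bytes >= min_disk_size )) && blocks+=("$name")
  done
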
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:34.936 16:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:35.509 Creating new GPT entries in memory. 00:05:35.509 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:35.509 other utilities. 00:05:35.509 16:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:35.509 16:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.509 16:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:35.509 16:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.509 16:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:36.452 Creating new GPT entries in memory. 00:05:36.453 The operation has completed successfully. 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1197905 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
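
The nvme_mount setup traced above (setup/common.sh@39-72) wipes the GPT, creates a single 1 GiB partition under flock while sync_dev_uevents.sh waits for the matching partition uevent (the later "wait 1197905"), then formats and mounts it before verify inspects the binding. Reduced to its essentials, with the mount path shortened and the uevent sync omitted:

  # Essentials of the traced nvme_mount setup (setup/common.sh@39-72);
  # the sync_dev_uevents.sh synchronisation is omitted from this sketch.
  disk=/dev/nvme0n1
  mnt=$PWD/nvme_mount                                   # stands in for .../spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                              # drop any existing partition table
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"
  mount "${disk}p1" "$mnt"
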
00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.453 16:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.760 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:39.761 16:44:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:40.022 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:40.022 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:40.292 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:40.292 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:40.292 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:40.292 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:40.292 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:40.292 16:45:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:40.292 16:45:00 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.292 16:45:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:40.292 16:45:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:40.292 16:45:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.553 16:45:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:43.864 16:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.126 16:45:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.127 16:45:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.435 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:47.697 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:47.697 00:05:47.697 real 0m13.434s 00:05:47.697 user 0m4.205s 00:05:47.697 sys 0m7.094s 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.697 16:45:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:47.697 ************************************ 00:05:47.697 END TEST nvme_mount 00:05:47.697 ************************************ 00:05:48.024 
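
Each verify pass above (setup/devices.sh@47-66) reruns "setup output config" with PCI_ALLOWED=0000:65:00.0 and reads the report as "pci _ _ status" lines: foreign PCI addresses are skipped, and found=1 is set only when the 0000:65:00.0 line carries the expected "Active devices: ..." entry (nvme0n1:nvme0n1p1 for the partition mount, nvme0n1:nvme0n1 for the whole-disk mount, then data@nvme0n1). Roughly:

  # Rough shape of the verify scan traced at setup/devices.sh@47-66.
  # setup_config_output is a placeholder for the captured "setup.sh config" run.
  verify_binding() {
      local dev=$1 mounts=$2 found=0 pci _ status
      while read -r pci _ _ status; do
          [[ $pci == "$dev" ]] || continue                         # skip other PCI addresses
          [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
      done < <(setup_config_output)
      (( found == 1 ))
  }
  # e.g. verify_binding 0000:65:00.0 nvme0n1:nvme0n1p1
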
16:45:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:48.024 16:45:08 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.024 16:45:08 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.024 16:45:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:48.024 ************************************ 00:05:48.024 START TEST dm_mount 00:05:48.024 ************************************ 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:48.024 16:45:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:48.969 Creating new GPT entries in memory. 00:05:48.969 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:48.969 other utilities. 00:05:48.969 16:45:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:48.969 16:45:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.969 16:45:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:48.969 16:45:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:48.969 16:45:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:49.914 Creating new GPT entries in memory. 00:05:49.914 The operation has completed successfully. 
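
For dm_mount, partition_drive is asked for two partitions (part_no=2) of the same 1 GiB size; the arithmetic at setup/common.sh@57-60 runs once per partition, producing the first sgdisk --new call traced above and the second one that follows just below. The same arithmetic, spelled out:

  # The start/end arithmetic of setup/common.sh@57-60, run for both partitions
  # (the second sgdisk call appears just below in the trace).
  disk=/dev/nvme0n1
  size=$(( 1073741824 / 512 ))            # 1 GiB in 512-byte sectors
  part_start=0 part_end=0
  for part in 1 2; do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
  done
  # Yields --new=1:2048:2099199 and --new=2:2099200:4196351.
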
00:05:49.914 16:45:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:49.914 16:45:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.914 16:45:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:49.914 16:45:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:49.914 16:45:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:50.859 The operation has completed successfully. 00:05:50.859 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:50.859 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.859 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1203199 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:51.120 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.121 16:45:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:54.428 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:54.690 
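
The dm_mount body traced above (setup/devices.sh@150-174) creates a device-mapper target named nvme_dm_test over the two partitions, resolves /dev/mapper/nvme_dm_test to dm-0, checks that both partitions list dm-0 under their holders/ directories, then formats and mounts the mapper device. A trimmed sketch; the table fed to dmsetup is not visible in the trace, so a linear concatenation of the two 1 GiB partitions is assumed:

  # Trimmed sketch of the dm_mount setup traced at setup/devices.sh@150-174.
  # The dmsetup table is an assumption (linear concatenation of both partitions).
  dm_name=nvme_dm_test
  printf '%s\n' \
      '0 2097152 linear /dev/nvme0n1p1 0' \
      '2097152 2097152 linear /dev/nvme0n1p2 0' | dmsetup create "$dm_name"
  dm=$(basename "$(readlink -f "/dev/mapper/$dm_name")")   # resolves to dm-0
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]           # both partitions now
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]           # report dm-0 as a holder
  mkfs.ext4 -qF "/dev/mapper/$dm_name"
  mkdir -p dm_mount && mount "/dev/mapper/$dm_name" dm_mount   # mount path shortened
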
16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.690 16:45:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:57.999 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.000 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:58.000 16:45:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:58.000 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:58.000 00:05:58.000 real 0m10.218s 00:05:58.000 user 0m2.533s 00:05:58.000 sys 0m4.720s 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.000 16:45:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:58.000 ************************************ 00:05:58.000 END TEST dm_mount 00:05:58.000 ************************************ 00:05:58.261 16:45:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:58.261 16:45:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:58.261 16:45:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.261 16:45:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
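The trace above shows cleanup_dm tearing down the dm_mount fixture: check the mount point, remove the nvme_dm_test device-mapper target, then wipe the filesystem signatures from both backing partitions. The following is a minimal standalone sketch of that teardown, not the harness code itself; the mount point and dm name are taken from this run, and it assumes root on a scratch NVMe disk.

#!/usr/bin/env bash
# Hedged sketch of the dm_mount teardown traced above (cleanup_dm).
# Assumes root and that nvme0n1p1/p2 back a dm target named nvme_dm_test.
set -euo pipefail

mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
dm_name=nvme_dm_test

# 1. Unmount the test filesystem if it is still mounted.
if mountpoint -q "$mount_point"; then
    umount "$mount_point"
fi

# 2. Tear down the device-mapper target if its node still exists.
if [[ -L /dev/mapper/$dm_name ]]; then
    dmsetup remove --force "$dm_name"
fi

# 3. Erase filesystem signatures from the backing partitions.
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    if [[ -b $part ]]; then
        wipefs --all "$part"
    fi
done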
00:05:58.261 16:45:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:58.261 16:45:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.261 16:45:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:58.523 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:58.523 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:58.523 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:58.523 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:58.523 16:45:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:58.523 16:45:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:58.523 16:45:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:58.523 16:45:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.523 16:45:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:58.523 16:45:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.523 16:45:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:58.523 00:05:58.523 real 0m28.062s 00:05:58.523 user 0m8.267s 00:05:58.523 sys 0m14.544s 00:05:58.523 16:45:18 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.523 16:45:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:58.523 ************************************ 00:05:58.523 END TEST devices 00:05:58.523 ************************************ 00:05:58.523 00:05:58.523 real 1m32.809s 00:05:58.523 user 0m30.306s 00:05:58.523 sys 0m53.411s 00:05:58.523 16:45:18 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.523 16:45:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:58.523 ************************************ 00:05:58.523 END TEST setup.sh 00:05:58.523 ************************************ 00:05:58.523 16:45:18 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:01.831 Hugepages 00:06:01.831 node hugesize free / total 00:06:01.831 node0 1048576kB 0 / 0 00:06:01.831 node0 2048kB 2048 / 2048 00:06:01.831 node1 1048576kB 0 / 0 00:06:01.831 node1 2048kB 0 / 0 00:06:01.831 00:06:01.831 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:01.831 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:01.831 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:01.831 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:01.831 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:01.831 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:01.831 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:01.831 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:01.831 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:02.092 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:02.092 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:02.092 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:02.092 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:02.092 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:02.092 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:02.092 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:02.092 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:02.092 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:02.092 16:45:22 -- spdk/autotest.sh@130 -- # uname -s 
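The setup.sh status output above lists hugepage availability per NUMA node before the PCI table. Those numbers come straight from sysfs; the short sketch below reads them the same way, assuming the default 2048 kB hugepage size and the standard /sys/devices/system/node layout.

#!/usr/bin/env bash
# Sketch: print free/total 2048kB hugepages per NUMA node, mirroring the
# "node hugesize free / total" section of the status output above.
set -euo pipefail

for node_dir in /sys/devices/system/node/node[0-9]*; do
    hp=$node_dir/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    printf '%s 2048kB %s / %s\n' \
        "$(basename "$node_dir")" "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
done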
00:06:02.092 16:45:22 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:02.092 16:45:22 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:02.092 16:45:22 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:05.397 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:05.397 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:07.312 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:07.312 16:45:27 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:08.700 16:45:28 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:08.700 16:45:28 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:08.700 16:45:28 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:08.701 16:45:28 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:08.701 16:45:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:08.701 16:45:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:08.701 16:45:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:08.701 16:45:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:08.701 16:45:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:08.701 16:45:28 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:08.701 16:45:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:08.701 16:45:28 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:12.006 Waiting for block devices as requested 00:06:12.006 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:12.006 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:12.006 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:12.006 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:12.006 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:12.268 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:12.268 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:12.268 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:12.529 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:12.529 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:12.789 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:12.789 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:12.789 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:12.789 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:13.049 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:13.049 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:13.049 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:13.310 16:45:33 -- common/autotest_common.sh@1538 -- # 
for bdf in "${bdfs[@]}" 00:06:13.310 16:45:33 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:06:13.310 16:45:33 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:13.310 16:45:33 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:13.310 16:45:33 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:13.310 16:45:33 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:13.310 16:45:33 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:06:13.310 16:45:33 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:13.310 16:45:33 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:13.310 16:45:33 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:13.310 16:45:33 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:13.310 16:45:33 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:13.310 16:45:33 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:13.310 16:45:33 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:13.310 16:45:33 -- common/autotest_common.sh@1557 -- # continue 00:06:13.310 16:45:33 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:13.310 16:45:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.310 16:45:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.310 16:45:33 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:13.310 16:45:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.310 16:45:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.310 16:45:33 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:16.643 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:16.643 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:16.904 16:45:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:16.904 16:45:37 -- common/autotest_common.sh@730 -- # xtrace_disable 
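The per-device loop traced above resolves the NVMe PCI address to its controller node under /sys/class/nvme and then reads the OACS (Optional Admin Command Support) field to decide whether namespace management is available. Below is a condensed, hedged sketch of that check; it assumes nvme-cli is installed, and the address is the one from this run.

#!/usr/bin/env bash
# Sketch: map a PCI BDF to its NVMe character device and test the
# namespace-management bit (0x8) of OACS, as traced above.
set -euo pipefail

bdf=0000:65:00.0   # address taken from this run; substitute your own device

ctrlr=
for sysfs in /sys/class/nvme/nvme*; do
    # The fully resolved sysfs path contains the owning PCI address.
    if readlink -f "$sysfs" | grep -q "$bdf/nvme/"; then
        ctrlr=/dev/$(basename "$sysfs")
        break
    fi
done
[[ -n $ctrlr ]] || { echo "no NVMe controller found for $bdf" >&2; exit 1; }

# 'nvme id-ctrl' reports OACS; bit 3 (0x8) means namespace management.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then
    echo "$ctrlr ($bdf): namespace management supported (oacs=$oacs)"
else
    echo "$ctrlr ($bdf): namespace management not supported (oacs=$oacs)"
fi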
00:06:16.904 16:45:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.166 16:45:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:17.166 16:45:37 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:17.166 16:45:37 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:17.166 16:45:37 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:17.166 16:45:37 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:17.166 16:45:37 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:17.166 16:45:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:17.166 16:45:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:17.166 16:45:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.166 16:45:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:17.166 16:45:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:17.166 16:45:37 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:17.166 16:45:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:17.166 16:45:37 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:17.166 16:45:37 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:17.166 16:45:37 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:06:17.166 16:45:37 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:17.166 16:45:37 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:17.166 16:45:37 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:17.166 16:45:37 -- common/autotest_common.sh@1593 -- # return 0 00:06:17.166 16:45:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:17.166 16:45:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:17.166 16:45:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:17.166 16:45:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:17.166 16:45:37 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:17.166 16:45:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.166 16:45:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.166 16:45:37 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:17.166 16:45:37 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:17.166 16:45:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.166 16:45:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.166 16:45:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.166 ************************************ 00:06:17.166 START TEST env 00:06:17.166 ************************************ 00:06:17.166 16:45:37 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:17.166 * Looking for test storage... 
00:06:17.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:17.166 16:45:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:17.166 16:45:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.166 16:45:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.166 16:45:37 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.428 ************************************ 00:06:17.428 START TEST env_memory 00:06:17.428 ************************************ 00:06:17.428 16:45:37 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:17.428 00:06:17.428 00:06:17.428 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.428 http://cunit.sourceforge.net/ 00:06:17.428 00:06:17.428 00:06:17.428 Suite: memory 00:06:17.428 Test: alloc and free memory map ...[2024-07-25 16:45:37.519845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:17.428 passed 00:06:17.428 Test: mem map translation ...[2024-07-25 16:45:37.545451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:17.428 [2024-07-25 16:45:37.545479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:17.428 [2024-07-25 16:45:37.545527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:17.428 [2024-07-25 16:45:37.545534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:17.428 passed 00:06:17.428 Test: mem map registration ...[2024-07-25 16:45:37.600940] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:17.428 [2024-07-25 16:45:37.600970] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:17.428 passed 00:06:17.428 Test: mem map adjacent registrations ...passed 00:06:17.428 00:06:17.428 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.428 suites 1 1 n/a 0 0 00:06:17.428 tests 4 4 4 0 0 00:06:17.428 asserts 152 152 152 0 n/a 00:06:17.428 00:06:17.428 Elapsed time = 0.194 seconds 00:06:17.428 00:06:17.428 real 0m0.209s 00:06:17.428 user 0m0.200s 00:06:17.428 sys 0m0.007s 00:06:17.428 16:45:37 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.428 16:45:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:17.428 ************************************ 00:06:17.428 END TEST env_memory 00:06:17.428 ************************************ 00:06:17.691 16:45:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:17.691 16:45:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.691 16:45:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:06:17.691 16:45:37 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.691 ************************************ 00:06:17.691 START TEST env_vtophys 00:06:17.691 ************************************ 00:06:17.691 16:45:37 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:17.691 EAL: lib.eal log level changed from notice to debug 00:06:17.691 EAL: Detected lcore 0 as core 0 on socket 0 00:06:17.691 EAL: Detected lcore 1 as core 1 on socket 0 00:06:17.691 EAL: Detected lcore 2 as core 2 on socket 0 00:06:17.691 EAL: Detected lcore 3 as core 3 on socket 0 00:06:17.691 EAL: Detected lcore 4 as core 4 on socket 0 00:06:17.691 EAL: Detected lcore 5 as core 5 on socket 0 00:06:17.691 EAL: Detected lcore 6 as core 6 on socket 0 00:06:17.691 EAL: Detected lcore 7 as core 7 on socket 0 00:06:17.691 EAL: Detected lcore 8 as core 8 on socket 0 00:06:17.691 EAL: Detected lcore 9 as core 9 on socket 0 00:06:17.691 EAL: Detected lcore 10 as core 10 on socket 0 00:06:17.691 EAL: Detected lcore 11 as core 11 on socket 0 00:06:17.691 EAL: Detected lcore 12 as core 12 on socket 0 00:06:17.691 EAL: Detected lcore 13 as core 13 on socket 0 00:06:17.691 EAL: Detected lcore 14 as core 14 on socket 0 00:06:17.691 EAL: Detected lcore 15 as core 15 on socket 0 00:06:17.691 EAL: Detected lcore 16 as core 16 on socket 0 00:06:17.691 EAL: Detected lcore 17 as core 17 on socket 0 00:06:17.691 EAL: Detected lcore 18 as core 18 on socket 0 00:06:17.691 EAL: Detected lcore 19 as core 19 on socket 0 00:06:17.691 EAL: Detected lcore 20 as core 20 on socket 0 00:06:17.691 EAL: Detected lcore 21 as core 21 on socket 0 00:06:17.691 EAL: Detected lcore 22 as core 22 on socket 0 00:06:17.691 EAL: Detected lcore 23 as core 23 on socket 0 00:06:17.691 EAL: Detected lcore 24 as core 24 on socket 0 00:06:17.691 EAL: Detected lcore 25 as core 25 on socket 0 00:06:17.691 EAL: Detected lcore 26 as core 26 on socket 0 00:06:17.691 EAL: Detected lcore 27 as core 27 on socket 0 00:06:17.691 EAL: Detected lcore 28 as core 28 on socket 0 00:06:17.691 EAL: Detected lcore 29 as core 29 on socket 0 00:06:17.691 EAL: Detected lcore 30 as core 30 on socket 0 00:06:17.691 EAL: Detected lcore 31 as core 31 on socket 0 00:06:17.691 EAL: Detected lcore 32 as core 32 on socket 0 00:06:17.691 EAL: Detected lcore 33 as core 33 on socket 0 00:06:17.691 EAL: Detected lcore 34 as core 34 on socket 0 00:06:17.691 EAL: Detected lcore 35 as core 35 on socket 0 00:06:17.691 EAL: Detected lcore 36 as core 0 on socket 1 00:06:17.691 EAL: Detected lcore 37 as core 1 on socket 1 00:06:17.691 EAL: Detected lcore 38 as core 2 on socket 1 00:06:17.691 EAL: Detected lcore 39 as core 3 on socket 1 00:06:17.691 EAL: Detected lcore 40 as core 4 on socket 1 00:06:17.691 EAL: Detected lcore 41 as core 5 on socket 1 00:06:17.691 EAL: Detected lcore 42 as core 6 on socket 1 00:06:17.691 EAL: Detected lcore 43 as core 7 on socket 1 00:06:17.691 EAL: Detected lcore 44 as core 8 on socket 1 00:06:17.691 EAL: Detected lcore 45 as core 9 on socket 1 00:06:17.691 EAL: Detected lcore 46 as core 10 on socket 1 00:06:17.691 EAL: Detected lcore 47 as core 11 on socket 1 00:06:17.691 EAL: Detected lcore 48 as core 12 on socket 1 00:06:17.691 EAL: Detected lcore 49 as core 13 on socket 1 00:06:17.691 EAL: Detected lcore 50 as core 14 on socket 1 00:06:17.691 EAL: Detected lcore 51 as core 15 on socket 1 00:06:17.691 EAL: Detected lcore 52 as core 16 on socket 1 00:06:17.691 EAL: Detected lcore 
53 as core 17 on socket 1 00:06:17.691 EAL: Detected lcore 54 as core 18 on socket 1 00:06:17.691 EAL: Detected lcore 55 as core 19 on socket 1 00:06:17.691 EAL: Detected lcore 56 as core 20 on socket 1 00:06:17.691 EAL: Detected lcore 57 as core 21 on socket 1 00:06:17.691 EAL: Detected lcore 58 as core 22 on socket 1 00:06:17.691 EAL: Detected lcore 59 as core 23 on socket 1 00:06:17.691 EAL: Detected lcore 60 as core 24 on socket 1 00:06:17.691 EAL: Detected lcore 61 as core 25 on socket 1 00:06:17.691 EAL: Detected lcore 62 as core 26 on socket 1 00:06:17.691 EAL: Detected lcore 63 as core 27 on socket 1 00:06:17.691 EAL: Detected lcore 64 as core 28 on socket 1 00:06:17.691 EAL: Detected lcore 65 as core 29 on socket 1 00:06:17.691 EAL: Detected lcore 66 as core 30 on socket 1 00:06:17.691 EAL: Detected lcore 67 as core 31 on socket 1 00:06:17.691 EAL: Detected lcore 68 as core 32 on socket 1 00:06:17.691 EAL: Detected lcore 69 as core 33 on socket 1 00:06:17.691 EAL: Detected lcore 70 as core 34 on socket 1 00:06:17.691 EAL: Detected lcore 71 as core 35 on socket 1 00:06:17.691 EAL: Detected lcore 72 as core 0 on socket 0 00:06:17.691 EAL: Detected lcore 73 as core 1 on socket 0 00:06:17.691 EAL: Detected lcore 74 as core 2 on socket 0 00:06:17.691 EAL: Detected lcore 75 as core 3 on socket 0 00:06:17.691 EAL: Detected lcore 76 as core 4 on socket 0 00:06:17.691 EAL: Detected lcore 77 as core 5 on socket 0 00:06:17.691 EAL: Detected lcore 78 as core 6 on socket 0 00:06:17.691 EAL: Detected lcore 79 as core 7 on socket 0 00:06:17.691 EAL: Detected lcore 80 as core 8 on socket 0 00:06:17.691 EAL: Detected lcore 81 as core 9 on socket 0 00:06:17.691 EAL: Detected lcore 82 as core 10 on socket 0 00:06:17.691 EAL: Detected lcore 83 as core 11 on socket 0 00:06:17.691 EAL: Detected lcore 84 as core 12 on socket 0 00:06:17.691 EAL: Detected lcore 85 as core 13 on socket 0 00:06:17.691 EAL: Detected lcore 86 as core 14 on socket 0 00:06:17.691 EAL: Detected lcore 87 as core 15 on socket 0 00:06:17.691 EAL: Detected lcore 88 as core 16 on socket 0 00:06:17.691 EAL: Detected lcore 89 as core 17 on socket 0 00:06:17.691 EAL: Detected lcore 90 as core 18 on socket 0 00:06:17.691 EAL: Detected lcore 91 as core 19 on socket 0 00:06:17.691 EAL: Detected lcore 92 as core 20 on socket 0 00:06:17.691 EAL: Detected lcore 93 as core 21 on socket 0 00:06:17.691 EAL: Detected lcore 94 as core 22 on socket 0 00:06:17.691 EAL: Detected lcore 95 as core 23 on socket 0 00:06:17.691 EAL: Detected lcore 96 as core 24 on socket 0 00:06:17.691 EAL: Detected lcore 97 as core 25 on socket 0 00:06:17.691 EAL: Detected lcore 98 as core 26 on socket 0 00:06:17.691 EAL: Detected lcore 99 as core 27 on socket 0 00:06:17.691 EAL: Detected lcore 100 as core 28 on socket 0 00:06:17.691 EAL: Detected lcore 101 as core 29 on socket 0 00:06:17.691 EAL: Detected lcore 102 as core 30 on socket 0 00:06:17.691 EAL: Detected lcore 103 as core 31 on socket 0 00:06:17.691 EAL: Detected lcore 104 as core 32 on socket 0 00:06:17.691 EAL: Detected lcore 105 as core 33 on socket 0 00:06:17.691 EAL: Detected lcore 106 as core 34 on socket 0 00:06:17.691 EAL: Detected lcore 107 as core 35 on socket 0 00:06:17.691 EAL: Detected lcore 108 as core 0 on socket 1 00:06:17.691 EAL: Detected lcore 109 as core 1 on socket 1 00:06:17.691 EAL: Detected lcore 110 as core 2 on socket 1 00:06:17.691 EAL: Detected lcore 111 as core 3 on socket 1 00:06:17.691 EAL: Detected lcore 112 as core 4 on socket 1 00:06:17.691 EAL: Detected lcore 113 as core 5 on 
socket 1 00:06:17.691 EAL: Detected lcore 114 as core 6 on socket 1 00:06:17.691 EAL: Detected lcore 115 as core 7 on socket 1 00:06:17.691 EAL: Detected lcore 116 as core 8 on socket 1 00:06:17.691 EAL: Detected lcore 117 as core 9 on socket 1 00:06:17.691 EAL: Detected lcore 118 as core 10 on socket 1 00:06:17.691 EAL: Detected lcore 119 as core 11 on socket 1 00:06:17.691 EAL: Detected lcore 120 as core 12 on socket 1 00:06:17.691 EAL: Detected lcore 121 as core 13 on socket 1 00:06:17.691 EAL: Detected lcore 122 as core 14 on socket 1 00:06:17.691 EAL: Detected lcore 123 as core 15 on socket 1 00:06:17.691 EAL: Detected lcore 124 as core 16 on socket 1 00:06:17.691 EAL: Detected lcore 125 as core 17 on socket 1 00:06:17.691 EAL: Detected lcore 126 as core 18 on socket 1 00:06:17.691 EAL: Detected lcore 127 as core 19 on socket 1 00:06:17.692 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:17.692 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:17.692 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:17.692 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:17.692 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:17.692 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:17.692 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:17.692 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:17.692 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:17.692 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:17.692 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:17.692 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:17.692 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:17.692 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:17.692 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:17.692 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:17.692 EAL: Maximum logical cores by configuration: 128 00:06:17.692 EAL: Detected CPU lcores: 128 00:06:17.692 EAL: Detected NUMA nodes: 2 00:06:17.692 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:17.692 EAL: Detected shared linkage of DPDK 00:06:17.692 EAL: No shared files mode enabled, IPC will be disabled 00:06:17.692 EAL: Bus pci wants IOVA as 'DC' 00:06:17.692 EAL: Buses did not request a specific IOVA mode. 00:06:17.692 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:17.692 EAL: Selected IOVA mode 'VA' 00:06:17.692 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.692 EAL: Probing VFIO support... 00:06:17.692 EAL: IOMMU type 1 (Type 1) is supported 00:06:17.692 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:17.692 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:17.692 EAL: VFIO support initialized 00:06:17.692 EAL: Ask a virtual area of 0x2e000 bytes 00:06:17.692 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:17.692 EAL: Setting up physically contiguous memory... 
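EAL selects IOVA-as-VA here because it found a working IOMMU and could initialize VFIO (type 1). Whether a host is in that state can be checked from sysfs ahead of time; the snippet below is one hedged way to do so, assuming the usual /sys/kernel/iommu_groups and /sys/module layout.

#!/usr/bin/env bash
# Sketch: quick host check for the conditions EAL reports above
# (IOMMU groups present, vfio-pci available) before running DPDK/SPDK apps.
set -euo pipefail

group_count=0
if [[ -d /sys/kernel/iommu_groups ]]; then
    group_count=$(ls /sys/kernel/iommu_groups | wc -l)
fi

if (( group_count > 0 )); then
    echo "IOMMU enabled: $group_count IOMMU groups present"
else
    echo "no IOMMU groups found; check the kernel cmdline (intel_iommu=on or amd_iommu=on)"
fi

if modprobe -n -q vfio-pci || [[ -d /sys/module/vfio_pci ]]; then
    echo "vfio-pci driver is available"
else
    echo "vfio-pci driver is not available"
fi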
00:06:17.692 EAL: Setting maximum number of open files to 524288 00:06:17.692 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:17.692 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:17.692 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:17.692 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:17.692 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.692 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:17.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.692 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.692 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:17.692 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:17.692 EAL: Hugepages will be freed exactly as allocated. 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: TSC frequency is ~2400000 KHz 00:06:17.692 EAL: Main lcore 0 is ready (tid=7f49222c2a00;cpuset=[0]) 00:06:17.692 EAL: Trying to obtain current memory policy. 00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 0 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was expanded by 2MB 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:17.692 EAL: Mem event callback 'spdk:(nil)' registered 00:06:17.692 00:06:17.692 00:06:17.692 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.692 http://cunit.sourceforge.net/ 00:06:17.692 00:06:17.692 00:06:17.692 Suite: components_suite 00:06:17.692 Test: vtophys_malloc_test ...passed 00:06:17.692 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 4 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was expanded by 4MB 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was shrunk by 4MB 00:06:17.692 EAL: Trying to obtain current memory policy. 00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 4 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was expanded by 6MB 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was shrunk by 6MB 00:06:17.692 EAL: Trying to obtain current memory policy. 00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 4 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was expanded by 10MB 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was shrunk by 10MB 00:06:17.692 EAL: Trying to obtain current memory policy. 
00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 4 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was expanded by 18MB 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was shrunk by 18MB 00:06:17.692 EAL: Trying to obtain current memory policy. 00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 4 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was expanded by 34MB 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was shrunk by 34MB 00:06:17.692 EAL: Trying to obtain current memory policy. 00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 4 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was expanded by 66MB 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.692 EAL: Heap on socket 0 was shrunk by 66MB 00:06:17.692 EAL: Trying to obtain current memory policy. 00:06:17.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.692 EAL: Restoring previous memory policy: 4 00:06:17.692 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.692 EAL: request: mp_malloc_sync 00:06:17.692 EAL: No shared files mode enabled, IPC is disabled 00:06:17.693 EAL: Heap on socket 0 was expanded by 130MB 00:06:17.693 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.693 EAL: request: mp_malloc_sync 00:06:17.693 EAL: No shared files mode enabled, IPC is disabled 00:06:17.693 EAL: Heap on socket 0 was shrunk by 130MB 00:06:17.693 EAL: Trying to obtain current memory policy. 00:06:17.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.954 EAL: Restoring previous memory policy: 4 00:06:17.954 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.954 EAL: request: mp_malloc_sync 00:06:17.954 EAL: No shared files mode enabled, IPC is disabled 00:06:17.954 EAL: Heap on socket 0 was expanded by 258MB 00:06:17.954 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.954 EAL: request: mp_malloc_sync 00:06:17.954 EAL: No shared files mode enabled, IPC is disabled 00:06:17.954 EAL: Heap on socket 0 was shrunk by 258MB 00:06:17.954 EAL: Trying to obtain current memory policy. 
00:06:17.954 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.954 EAL: Restoring previous memory policy: 4 00:06:17.954 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.954 EAL: request: mp_malloc_sync 00:06:17.954 EAL: No shared files mode enabled, IPC is disabled 00:06:17.954 EAL: Heap on socket 0 was expanded by 514MB 00:06:17.954 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.954 EAL: request: mp_malloc_sync 00:06:17.954 EAL: No shared files mode enabled, IPC is disabled 00:06:17.954 EAL: Heap on socket 0 was shrunk by 514MB 00:06:17.954 EAL: Trying to obtain current memory policy. 00:06:17.954 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.215 EAL: Restoring previous memory policy: 4 00:06:18.215 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.215 EAL: request: mp_malloc_sync 00:06:18.215 EAL: No shared files mode enabled, IPC is disabled 00:06:18.215 EAL: Heap on socket 0 was expanded by 1026MB 00:06:18.215 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.477 EAL: request: mp_malloc_sync 00:06:18.477 EAL: No shared files mode enabled, IPC is disabled 00:06:18.477 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:18.477 passed 00:06:18.477 00:06:18.477 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.477 suites 1 1 n/a 0 0 00:06:18.477 tests 2 2 2 0 0 00:06:18.477 asserts 497 497 497 0 n/a 00:06:18.477 00:06:18.477 Elapsed time = 0.660 seconds 00:06:18.477 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.477 EAL: request: mp_malloc_sync 00:06:18.477 EAL: No shared files mode enabled, IPC is disabled 00:06:18.477 EAL: Heap on socket 0 was shrunk by 2MB 00:06:18.477 EAL: No shared files mode enabled, IPC is disabled 00:06:18.477 EAL: No shared files mode enabled, IPC is disabled 00:06:18.477 EAL: No shared files mode enabled, IPC is disabled 00:06:18.477 00:06:18.477 real 0m0.779s 00:06:18.477 user 0m0.412s 00:06:18.477 sys 0m0.345s 00:06:18.477 16:45:38 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.477 16:45:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 ************************************ 00:06:18.477 END TEST env_vtophys 00:06:18.477 ************************************ 00:06:18.477 16:45:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:18.477 16:45:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.477 16:45:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.477 16:45:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 ************************************ 00:06:18.477 START TEST env_pci 00:06:18.477 ************************************ 00:06:18.477 16:45:38 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:18.477 00:06:18.477 00:06:18.477 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.477 http://cunit.sourceforge.net/ 00:06:18.477 00:06:18.477 00:06:18.477 Suite: pci 00:06:18.477 Test: pci_hook ...[2024-07-25 16:45:38.615280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1214712 has claimed it 00:06:18.477 EAL: Cannot find device (10000:00:01.0) 00:06:18.477 EAL: Failed to attach device on primary process 00:06:18.477 passed 00:06:18.477 00:06:18.477 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:18.477 suites 1 1 n/a 0 0 00:06:18.477 tests 1 1 1 0 0 00:06:18.477 asserts 25 25 25 0 n/a 00:06:18.477 00:06:18.477 Elapsed time = 0.029 seconds 00:06:18.477 00:06:18.477 real 0m0.050s 00:06:18.477 user 0m0.015s 00:06:18.477 sys 0m0.035s 00:06:18.477 16:45:38 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.477 16:45:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 ************************************ 00:06:18.477 END TEST env_pci 00:06:18.477 ************************************ 00:06:18.477 16:45:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:18.477 16:45:38 env -- env/env.sh@15 -- # uname 00:06:18.477 16:45:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:18.477 16:45:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:18.477 16:45:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.477 16:45:38 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:18.478 16:45:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.478 16:45:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.478 ************************************ 00:06:18.478 START TEST env_dpdk_post_init 00:06:18.478 ************************************ 00:06:18.478 16:45:38 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.739 EAL: Detected CPU lcores: 128 00:06:18.739 EAL: Detected NUMA nodes: 2 00:06:18.739 EAL: Detected shared linkage of DPDK 00:06:18.739 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.739 EAL: Selected IOVA mode 'VA' 00:06:18.739 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.739 EAL: VFIO support initialized 00:06:18.739 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.739 EAL: Using IOMMU type 1 (Type 1) 00:06:18.739 EAL: Ignore mapping IO port bar(1) 00:06:19.000 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:19.000 EAL: Ignore mapping IO port bar(1) 00:06:19.262 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:19.262 EAL: Ignore mapping IO port bar(1) 00:06:19.262 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:19.524 EAL: Ignore mapping IO port bar(1) 00:06:19.524 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:19.785 EAL: Ignore mapping IO port bar(1) 00:06:19.785 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:20.047 EAL: Ignore mapping IO port bar(1) 00:06:20.047 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:20.047 EAL: Ignore mapping IO port bar(1) 00:06:20.308 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:20.308 EAL: Ignore mapping IO port bar(1) 00:06:20.569 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:20.828 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:20.829 EAL: Ignore mapping IO port bar(1) 00:06:20.829 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:21.088 EAL: Ignore mapping IO port bar(1) 00:06:21.088 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
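These probe lines only appear because the I/OAT and NVMe functions were moved from their kernel drivers (ioatdma, nvme) to vfio-pci earlier in the log. Which driver currently owns a PCI function can be read from sysfs, and driver_override is the usual mechanism for moving it; the sketch below illustrates both and is not the exact logic of scripts/setup.sh.

#!/usr/bin/env bash
# Sketch: report the driver bound to a PCI function and optionally rebind
# it to vfio-pci via driver_override, which is what must have happened
# before the spdk_ioat/spdk_nvme probes above. Run as root.
set -euo pipefail

bdf=0000:65:00.0   # example address from this run
dev=/sys/bus/pci/devices/$bdf

if [[ -e $dev/driver ]]; then
    echo "$bdf is bound to $(basename "$(readlink -f "$dev/driver")")"
else
    echo "$bdf is not bound to any driver"
fi

rebind_to_vfio() {
    # Unbind from the current driver, let driver_override steer the next
    # probe to vfio-pci, then clear the override again.
    if [[ -e $dev/driver ]]; then
        echo "$bdf" > "$dev/driver/unbind"
    fi
    echo vfio-pci > "$dev/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo > "$dev/driver_override"
}
# rebind_to_vfio   # uncomment to actually rebind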
00:06:21.349 EAL: Ignore mapping IO port bar(1) 00:06:21.349 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:21.609 EAL: Ignore mapping IO port bar(1) 00:06:21.609 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:21.609 EAL: Ignore mapping IO port bar(1) 00:06:21.869 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:21.869 EAL: Ignore mapping IO port bar(1) 00:06:22.130 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:22.130 EAL: Ignore mapping IO port bar(1) 00:06:22.391 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:22.391 EAL: Ignore mapping IO port bar(1) 00:06:22.391 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:22.391 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:22.391 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:22.651 Starting DPDK initialization... 00:06:22.651 Starting SPDK post initialization... 00:06:22.651 SPDK NVMe probe 00:06:22.651 Attaching to 0000:65:00.0 00:06:22.651 Attached to 0000:65:00.0 00:06:22.651 Cleaning up... 00:06:24.568 00:06:24.568 real 0m5.714s 00:06:24.568 user 0m0.189s 00:06:24.568 sys 0m0.068s 00:06:24.568 16:45:44 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.568 16:45:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 ************************************ 00:06:24.568 END TEST env_dpdk_post_init 00:06:24.568 ************************************ 00:06:24.568 16:45:44 env -- env/env.sh@26 -- # uname 00:06:24.568 16:45:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:24.568 16:45:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:24.568 16:45:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.568 16:45:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.568 16:45:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 ************************************ 00:06:24.568 START TEST env_mem_callbacks 00:06:24.568 ************************************ 00:06:24.568 16:45:44 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:24.568 EAL: Detected CPU lcores: 128 00:06:24.568 EAL: Detected NUMA nodes: 2 00:06:24.568 EAL: Detected shared linkage of DPDK 00:06:24.568 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:24.568 EAL: Selected IOVA mode 'VA' 00:06:24.568 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.568 EAL: VFIO support initialized 00:06:24.568 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:24.568 00:06:24.568 00:06:24.568 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.568 http://cunit.sourceforge.net/ 00:06:24.568 00:06:24.568 00:06:24.568 Suite: memory 00:06:24.568 Test: test ... 
00:06:24.568 register 0x200000200000 2097152 00:06:24.568 malloc 3145728 00:06:24.568 register 0x200000400000 4194304 00:06:24.568 buf 0x200000500000 len 3145728 PASSED 00:06:24.568 malloc 64 00:06:24.568 buf 0x2000004fff40 len 64 PASSED 00:06:24.568 malloc 4194304 00:06:24.568 register 0x200000800000 6291456 00:06:24.568 buf 0x200000a00000 len 4194304 PASSED 00:06:24.568 free 0x200000500000 3145728 00:06:24.568 free 0x2000004fff40 64 00:06:24.568 unregister 0x200000400000 4194304 PASSED 00:06:24.568 free 0x200000a00000 4194304 00:06:24.568 unregister 0x200000800000 6291456 PASSED 00:06:24.568 malloc 8388608 00:06:24.568 register 0x200000400000 10485760 00:06:24.568 buf 0x200000600000 len 8388608 PASSED 00:06:24.568 free 0x200000600000 8388608 00:06:24.568 unregister 0x200000400000 10485760 PASSED 00:06:24.568 passed 00:06:24.568 00:06:24.568 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.568 suites 1 1 n/a 0 0 00:06:24.568 tests 1 1 1 0 0 00:06:24.568 asserts 15 15 15 0 n/a 00:06:24.568 00:06:24.568 Elapsed time = 0.008 seconds 00:06:24.568 00:06:24.568 real 0m0.065s 00:06:24.568 user 0m0.021s 00:06:24.568 sys 0m0.045s 00:06:24.568 16:45:44 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.568 16:45:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 ************************************ 00:06:24.568 END TEST env_mem_callbacks 00:06:24.568 ************************************ 00:06:24.568 00:06:24.568 real 0m7.300s 00:06:24.568 user 0m1.021s 00:06:24.568 sys 0m0.826s 00:06:24.568 16:45:44 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.568 16:45:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 ************************************ 00:06:24.568 END TEST env 00:06:24.568 ************************************ 00:06:24.568 16:45:44 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:24.568 16:45:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.568 16:45:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.568 16:45:44 -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 ************************************ 00:06:24.568 START TEST rpc 00:06:24.568 ************************************ 00:06:24.568 16:45:44 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:24.568 * Looking for test storage... 00:06:24.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:24.568 16:45:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1216101 00:06:24.568 16:45:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.568 16:45:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:24.568 16:45:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1216101 00:06:24.568 16:45:44 rpc -- common/autotest_common.sh@831 -- # '[' -z 1216101 ']' 00:06:24.568 16:45:44 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.568 16:45:44 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.568 16:45:44 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
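waitforlisten above blocks until the freshly started spdk_tgt answers on its RPC Unix socket. Outside the harness the same wait can be reproduced by polling an RPC that every SPDK target exposes; the sketch below uses rpc.py with rpc_get_methods and assumes the default /var/tmp/spdk.sock socket plus a local SPDK checkout.

#!/usr/bin/env bash
# Sketch: start spdk_tgt and wait until its RPC socket responds, roughly
# what waitforlisten does in the trace above.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
tgt_pid=$!

# Poll the socket; rpc_get_methods succeeds once the target is listening.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "spdk_tgt (pid $tgt_pid) is listening on $SOCK"
        exit 0
    fi
    sleep 0.5
done

echo "spdk_tgt did not come up in time" >&2
kill "$tgt_pid" || true
exit 1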
00:06:24.568 16:45:44 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.568 16:45:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.830 [2024-07-25 16:45:44.888840] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:24.830 [2024-07-25 16:45:44.888908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216101 ] 00:06:24.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.830 [2024-07-25 16:45:44.952959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.830 [2024-07-25 16:45:45.020413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:24.830 [2024-07-25 16:45:45.020452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1216101' to capture a snapshot of events at runtime. 00:06:24.830 [2024-07-25 16:45:45.020460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.830 [2024-07-25 16:45:45.020467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.830 [2024-07-25 16:45:45.020472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1216101 for offline analysis/debug. 00:06:24.830 [2024-07-25 16:45:45.020495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.402 16:45:45 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.402 16:45:45 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:25.402 16:45:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:25.402 16:45:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:25.402 16:45:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:25.402 16:45:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:25.402 16:45:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.402 16:45:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.402 16:45:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.664 ************************************ 00:06:25.664 START TEST rpc_integrity 00:06:25.664 ************************************ 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:25.664 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.664 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:25.664 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:25.664 16:45:45 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:25.664 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.664 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:25.664 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.664 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.664 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:25.664 { 00:06:25.664 "name": "Malloc0", 00:06:25.664 "aliases": [ 00:06:25.664 "5e81fefb-eadf-4dc6-8f18-261ab810e69d" 00:06:25.664 ], 00:06:25.664 "product_name": "Malloc disk", 00:06:25.664 "block_size": 512, 00:06:25.664 "num_blocks": 16384, 00:06:25.664 "uuid": "5e81fefb-eadf-4dc6-8f18-261ab810e69d", 00:06:25.664 "assigned_rate_limits": { 00:06:25.664 "rw_ios_per_sec": 0, 00:06:25.664 "rw_mbytes_per_sec": 0, 00:06:25.664 "r_mbytes_per_sec": 0, 00:06:25.664 "w_mbytes_per_sec": 0 00:06:25.664 }, 00:06:25.664 "claimed": false, 00:06:25.664 "zoned": false, 00:06:25.664 "supported_io_types": { 00:06:25.664 "read": true, 00:06:25.664 "write": true, 00:06:25.664 "unmap": true, 00:06:25.664 "flush": true, 00:06:25.664 "reset": true, 00:06:25.664 "nvme_admin": false, 00:06:25.664 "nvme_io": false, 00:06:25.664 "nvme_io_md": false, 00:06:25.665 "write_zeroes": true, 00:06:25.665 "zcopy": true, 00:06:25.665 "get_zone_info": false, 00:06:25.665 "zone_management": false, 00:06:25.665 "zone_append": false, 00:06:25.665 "compare": false, 00:06:25.665 "compare_and_write": false, 00:06:25.665 "abort": true, 00:06:25.665 "seek_hole": false, 00:06:25.665 "seek_data": false, 00:06:25.665 "copy": true, 00:06:25.665 "nvme_iov_md": false 00:06:25.665 }, 00:06:25.665 "memory_domains": [ 00:06:25.665 { 00:06:25.665 "dma_device_id": "system", 00:06:25.665 "dma_device_type": 1 00:06:25.665 }, 00:06:25.665 { 00:06:25.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.665 "dma_device_type": 2 00:06:25.665 } 00:06:25.665 ], 00:06:25.665 "driver_specific": {} 00:06:25.665 } 00:06:25.665 ]' 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.665 [2024-07-25 16:45:45.827521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:25.665 [2024-07-25 16:45:45.827554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:25.665 [2024-07-25 16:45:45.827566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1badd80 00:06:25.665 [2024-07-25 16:45:45.827573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:25.665 [2024-07-25 16:45:45.828910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
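rpc_integrity above creates an 8 MiB malloc bdev with 512-byte blocks, stacks a passthru bdev on it, and checks the bdev count with jq at each step. A minimal sketch of the same sequence driven by hand through scripts/rpc.py (the malloc name is whatever the target returns, e.g. Malloc0; Passthru0 matches the test):

    # create the malloc bdev with the same arguments as the test (8 MiB, 512-byte blocks);
    # rpc.py prints the generated bdev name
    malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)

    # stack a passthru bdev on top and confirm both devices are listed
    ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length     # expect 2

    # tear down in reverse order and confirm the list is empty again
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$malloc"
    ./scripts/rpc.py bdev_get_bdevs | jq length     # expect 0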
00:06:25.665 [2024-07-25 16:45:45.828931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:25.665 Passthru0 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:25.665 { 00:06:25.665 "name": "Malloc0", 00:06:25.665 "aliases": [ 00:06:25.665 "5e81fefb-eadf-4dc6-8f18-261ab810e69d" 00:06:25.665 ], 00:06:25.665 "product_name": "Malloc disk", 00:06:25.665 "block_size": 512, 00:06:25.665 "num_blocks": 16384, 00:06:25.665 "uuid": "5e81fefb-eadf-4dc6-8f18-261ab810e69d", 00:06:25.665 "assigned_rate_limits": { 00:06:25.665 "rw_ios_per_sec": 0, 00:06:25.665 "rw_mbytes_per_sec": 0, 00:06:25.665 "r_mbytes_per_sec": 0, 00:06:25.665 "w_mbytes_per_sec": 0 00:06:25.665 }, 00:06:25.665 "claimed": true, 00:06:25.665 "claim_type": "exclusive_write", 00:06:25.665 "zoned": false, 00:06:25.665 "supported_io_types": { 00:06:25.665 "read": true, 00:06:25.665 "write": true, 00:06:25.665 "unmap": true, 00:06:25.665 "flush": true, 00:06:25.665 "reset": true, 00:06:25.665 "nvme_admin": false, 00:06:25.665 "nvme_io": false, 00:06:25.665 "nvme_io_md": false, 00:06:25.665 "write_zeroes": true, 00:06:25.665 "zcopy": true, 00:06:25.665 "get_zone_info": false, 00:06:25.665 "zone_management": false, 00:06:25.665 "zone_append": false, 00:06:25.665 "compare": false, 00:06:25.665 "compare_and_write": false, 00:06:25.665 "abort": true, 00:06:25.665 "seek_hole": false, 00:06:25.665 "seek_data": false, 00:06:25.665 "copy": true, 00:06:25.665 "nvme_iov_md": false 00:06:25.665 }, 00:06:25.665 "memory_domains": [ 00:06:25.665 { 00:06:25.665 "dma_device_id": "system", 00:06:25.665 "dma_device_type": 1 00:06:25.665 }, 00:06:25.665 { 00:06:25.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.665 "dma_device_type": 2 00:06:25.665 } 00:06:25.665 ], 00:06:25.665 "driver_specific": {} 00:06:25.665 }, 00:06:25.665 { 00:06:25.665 "name": "Passthru0", 00:06:25.665 "aliases": [ 00:06:25.665 "08173d61-92a1-59e3-837f-9c341e83739a" 00:06:25.665 ], 00:06:25.665 "product_name": "passthru", 00:06:25.665 "block_size": 512, 00:06:25.665 "num_blocks": 16384, 00:06:25.665 "uuid": "08173d61-92a1-59e3-837f-9c341e83739a", 00:06:25.665 "assigned_rate_limits": { 00:06:25.665 "rw_ios_per_sec": 0, 00:06:25.665 "rw_mbytes_per_sec": 0, 00:06:25.665 "r_mbytes_per_sec": 0, 00:06:25.665 "w_mbytes_per_sec": 0 00:06:25.665 }, 00:06:25.665 "claimed": false, 00:06:25.665 "zoned": false, 00:06:25.665 "supported_io_types": { 00:06:25.665 "read": true, 00:06:25.665 "write": true, 00:06:25.665 "unmap": true, 00:06:25.665 "flush": true, 00:06:25.665 "reset": true, 00:06:25.665 "nvme_admin": false, 00:06:25.665 "nvme_io": false, 00:06:25.665 "nvme_io_md": false, 00:06:25.665 "write_zeroes": true, 00:06:25.665 "zcopy": true, 00:06:25.665 "get_zone_info": false, 00:06:25.665 "zone_management": false, 00:06:25.665 "zone_append": false, 00:06:25.665 "compare": false, 00:06:25.665 "compare_and_write": false, 00:06:25.665 "abort": true, 00:06:25.665 "seek_hole": false, 00:06:25.665 "seek_data": false, 00:06:25.665 "copy": true, 00:06:25.665 "nvme_iov_md": false 00:06:25.665 
}, 00:06:25.665 "memory_domains": [ 00:06:25.665 { 00:06:25.665 "dma_device_id": "system", 00:06:25.665 "dma_device_type": 1 00:06:25.665 }, 00:06:25.665 { 00:06:25.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.665 "dma_device_type": 2 00:06:25.665 } 00:06:25.665 ], 00:06:25.665 "driver_specific": { 00:06:25.665 "passthru": { 00:06:25.665 "name": "Passthru0", 00:06:25.665 "base_bdev_name": "Malloc0" 00:06:25.665 } 00:06:25.665 } 00:06:25.665 } 00:06:25.665 ]' 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.665 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.665 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.937 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:25.937 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:25.937 16:45:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:25.937 00:06:25.937 real 0m0.296s 00:06:25.937 user 0m0.191s 00:06:25.937 sys 0m0.043s 00:06:25.937 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.937 16:45:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.937 ************************************ 00:06:25.937 END TEST rpc_integrity 00:06:25.937 ************************************ 00:06:25.937 16:45:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:25.937 16:45:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.937 16:45:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.937 16:45:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.937 ************************************ 00:06:25.937 START TEST rpc_plugins 00:06:25.937 ************************************ 00:06:25.937 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:25.937 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:25.937 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.937 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.937 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.937 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:25.937 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:25.937 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.937 16:45:46 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.937 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.937 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:25.937 { 00:06:25.937 "name": "Malloc1", 00:06:25.937 "aliases": [ 00:06:25.937 "b9b23d27-219d-4d5d-aaf4-0a656e7c89b1" 00:06:25.937 ], 00:06:25.937 "product_name": "Malloc disk", 00:06:25.937 "block_size": 4096, 00:06:25.937 "num_blocks": 256, 00:06:25.937 "uuid": "b9b23d27-219d-4d5d-aaf4-0a656e7c89b1", 00:06:25.937 "assigned_rate_limits": { 00:06:25.937 "rw_ios_per_sec": 0, 00:06:25.937 "rw_mbytes_per_sec": 0, 00:06:25.937 "r_mbytes_per_sec": 0, 00:06:25.937 "w_mbytes_per_sec": 0 00:06:25.937 }, 00:06:25.937 "claimed": false, 00:06:25.937 "zoned": false, 00:06:25.937 "supported_io_types": { 00:06:25.937 "read": true, 00:06:25.937 "write": true, 00:06:25.937 "unmap": true, 00:06:25.937 "flush": true, 00:06:25.937 "reset": true, 00:06:25.937 "nvme_admin": false, 00:06:25.937 "nvme_io": false, 00:06:25.937 "nvme_io_md": false, 00:06:25.937 "write_zeroes": true, 00:06:25.937 "zcopy": true, 00:06:25.937 "get_zone_info": false, 00:06:25.937 "zone_management": false, 00:06:25.937 "zone_append": false, 00:06:25.937 "compare": false, 00:06:25.937 "compare_and_write": false, 00:06:25.937 "abort": true, 00:06:25.937 "seek_hole": false, 00:06:25.937 "seek_data": false, 00:06:25.937 "copy": true, 00:06:25.937 "nvme_iov_md": false 00:06:25.937 }, 00:06:25.937 "memory_domains": [ 00:06:25.937 { 00:06:25.937 "dma_device_id": "system", 00:06:25.937 "dma_device_type": 1 00:06:25.937 }, 00:06:25.937 { 00:06:25.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.937 "dma_device_type": 2 00:06:25.937 } 00:06:25.937 ], 00:06:25.937 "driver_specific": {} 00:06:25.937 } 00:06:25.937 ]' 00:06:25.937 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:25.937 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:25.937 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.938 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.938 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:25.938 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:25.938 16:45:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:25.938 00:06:25.938 real 0m0.149s 00:06:25.938 user 0m0.094s 00:06:25.938 sys 0m0.018s 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.938 16:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 ************************************ 00:06:25.938 END TEST rpc_plugins 00:06:25.938 ************************************ 00:06:26.199 16:45:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:26.199 16:45:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.199 16:45:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.199 16:45:46 
rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.199 ************************************ 00:06:26.199 START TEST rpc_trace_cmd_test 00:06:26.199 ************************************ 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:26.199 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1216101", 00:06:26.199 "tpoint_group_mask": "0x8", 00:06:26.199 "iscsi_conn": { 00:06:26.199 "mask": "0x2", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "scsi": { 00:06:26.199 "mask": "0x4", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "bdev": { 00:06:26.199 "mask": "0x8", 00:06:26.199 "tpoint_mask": "0xffffffffffffffff" 00:06:26.199 }, 00:06:26.199 "nvmf_rdma": { 00:06:26.199 "mask": "0x10", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "nvmf_tcp": { 00:06:26.199 "mask": "0x20", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "ftl": { 00:06:26.199 "mask": "0x40", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "blobfs": { 00:06:26.199 "mask": "0x80", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "dsa": { 00:06:26.199 "mask": "0x200", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "thread": { 00:06:26.199 "mask": "0x400", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "nvme_pcie": { 00:06:26.199 "mask": "0x800", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "iaa": { 00:06:26.199 "mask": "0x1000", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "nvme_tcp": { 00:06:26.199 "mask": "0x2000", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "bdev_nvme": { 00:06:26.199 "mask": "0x4000", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 }, 00:06:26.199 "sock": { 00:06:26.199 "mask": "0x8000", 00:06:26.199 "tpoint_mask": "0x0" 00:06:26.199 } 00:06:26.199 }' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:26.199 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:26.461 16:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:26.461 00:06:26.461 real 0m0.233s 00:06:26.461 user 0m0.189s 00:06:26.461 sys 0m0.034s 00:06:26.461 16:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.461 16:45:46 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.461 ************************************ 00:06:26.461 END TEST rpc_trace_cmd_test 00:06:26.461 ************************************ 00:06:26.461 16:45:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:26.461 16:45:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:26.461 16:45:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:26.461 16:45:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.461 16:45:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.461 16:45:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.461 ************************************ 00:06:26.461 START TEST rpc_daemon_integrity 00:06:26.461 ************************************ 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:26.461 { 00:06:26.461 "name": "Malloc2", 00:06:26.461 "aliases": [ 00:06:26.461 "953b06be-0fbc-489e-a025-3f04fa0fbc7e" 00:06:26.461 ], 00:06:26.461 "product_name": "Malloc disk", 00:06:26.461 "block_size": 512, 00:06:26.461 "num_blocks": 16384, 00:06:26.461 "uuid": "953b06be-0fbc-489e-a025-3f04fa0fbc7e", 00:06:26.461 "assigned_rate_limits": { 00:06:26.461 "rw_ios_per_sec": 0, 00:06:26.461 "rw_mbytes_per_sec": 0, 00:06:26.461 "r_mbytes_per_sec": 0, 00:06:26.461 "w_mbytes_per_sec": 0 00:06:26.461 }, 00:06:26.461 "claimed": false, 00:06:26.461 "zoned": false, 00:06:26.461 "supported_io_types": { 00:06:26.461 "read": true, 00:06:26.461 "write": true, 00:06:26.461 "unmap": true, 00:06:26.461 "flush": true, 00:06:26.461 "reset": true, 00:06:26.461 "nvme_admin": false, 00:06:26.461 "nvme_io": false, 00:06:26.461 "nvme_io_md": false, 00:06:26.461 "write_zeroes": true, 00:06:26.461 "zcopy": true, 00:06:26.461 "get_zone_info": false, 00:06:26.461 "zone_management": false, 00:06:26.461 "zone_append": false, 00:06:26.461 "compare": false, 00:06:26.461 "compare_and_write": false, 
00:06:26.461 "abort": true, 00:06:26.461 "seek_hole": false, 00:06:26.461 "seek_data": false, 00:06:26.461 "copy": true, 00:06:26.461 "nvme_iov_md": false 00:06:26.461 }, 00:06:26.461 "memory_domains": [ 00:06:26.461 { 00:06:26.461 "dma_device_id": "system", 00:06:26.461 "dma_device_type": 1 00:06:26.461 }, 00:06:26.461 { 00:06:26.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.461 "dma_device_type": 2 00:06:26.461 } 00:06:26.461 ], 00:06:26.461 "driver_specific": {} 00:06:26.461 } 00:06:26.461 ]' 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:26.461 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.462 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.462 [2024-07-25 16:45:46.725945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:26.462 [2024-07-25 16:45:46.725974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.462 [2024-07-25 16:45:46.725986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baea90 00:06:26.462 [2024-07-25 16:45:46.725992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.462 [2024-07-25 16:45:46.727209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.462 [2024-07-25 16:45:46.727230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:26.462 Passthru0 00:06:26.462 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.462 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:26.462 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.462 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:26.724 { 00:06:26.724 "name": "Malloc2", 00:06:26.724 "aliases": [ 00:06:26.724 "953b06be-0fbc-489e-a025-3f04fa0fbc7e" 00:06:26.724 ], 00:06:26.724 "product_name": "Malloc disk", 00:06:26.724 "block_size": 512, 00:06:26.724 "num_blocks": 16384, 00:06:26.724 "uuid": "953b06be-0fbc-489e-a025-3f04fa0fbc7e", 00:06:26.724 "assigned_rate_limits": { 00:06:26.724 "rw_ios_per_sec": 0, 00:06:26.724 "rw_mbytes_per_sec": 0, 00:06:26.724 "r_mbytes_per_sec": 0, 00:06:26.724 "w_mbytes_per_sec": 0 00:06:26.724 }, 00:06:26.724 "claimed": true, 00:06:26.724 "claim_type": "exclusive_write", 00:06:26.724 "zoned": false, 00:06:26.724 "supported_io_types": { 00:06:26.724 "read": true, 00:06:26.724 "write": true, 00:06:26.724 "unmap": true, 00:06:26.724 "flush": true, 00:06:26.724 "reset": true, 00:06:26.724 "nvme_admin": false, 00:06:26.724 "nvme_io": false, 00:06:26.724 "nvme_io_md": false, 00:06:26.724 "write_zeroes": true, 00:06:26.724 "zcopy": true, 00:06:26.724 "get_zone_info": false, 00:06:26.724 "zone_management": false, 00:06:26.724 "zone_append": false, 00:06:26.724 "compare": false, 00:06:26.724 "compare_and_write": false, 00:06:26.724 "abort": true, 00:06:26.724 "seek_hole": false, 00:06:26.724 "seek_data": false, 00:06:26.724 "copy": true, 
00:06:26.724 "nvme_iov_md": false 00:06:26.724 }, 00:06:26.724 "memory_domains": [ 00:06:26.724 { 00:06:26.724 "dma_device_id": "system", 00:06:26.724 "dma_device_type": 1 00:06:26.724 }, 00:06:26.724 { 00:06:26.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.724 "dma_device_type": 2 00:06:26.724 } 00:06:26.724 ], 00:06:26.724 "driver_specific": {} 00:06:26.724 }, 00:06:26.724 { 00:06:26.724 "name": "Passthru0", 00:06:26.724 "aliases": [ 00:06:26.724 "9dcbdc86-5afb-5429-9c2c-888039e6a382" 00:06:26.724 ], 00:06:26.724 "product_name": "passthru", 00:06:26.724 "block_size": 512, 00:06:26.724 "num_blocks": 16384, 00:06:26.724 "uuid": "9dcbdc86-5afb-5429-9c2c-888039e6a382", 00:06:26.724 "assigned_rate_limits": { 00:06:26.724 "rw_ios_per_sec": 0, 00:06:26.724 "rw_mbytes_per_sec": 0, 00:06:26.724 "r_mbytes_per_sec": 0, 00:06:26.724 "w_mbytes_per_sec": 0 00:06:26.724 }, 00:06:26.724 "claimed": false, 00:06:26.724 "zoned": false, 00:06:26.724 "supported_io_types": { 00:06:26.724 "read": true, 00:06:26.724 "write": true, 00:06:26.724 "unmap": true, 00:06:26.724 "flush": true, 00:06:26.724 "reset": true, 00:06:26.724 "nvme_admin": false, 00:06:26.724 "nvme_io": false, 00:06:26.724 "nvme_io_md": false, 00:06:26.724 "write_zeroes": true, 00:06:26.724 "zcopy": true, 00:06:26.724 "get_zone_info": false, 00:06:26.724 "zone_management": false, 00:06:26.724 "zone_append": false, 00:06:26.724 "compare": false, 00:06:26.724 "compare_and_write": false, 00:06:26.724 "abort": true, 00:06:26.724 "seek_hole": false, 00:06:26.724 "seek_data": false, 00:06:26.724 "copy": true, 00:06:26.724 "nvme_iov_md": false 00:06:26.724 }, 00:06:26.724 "memory_domains": [ 00:06:26.724 { 00:06:26.724 "dma_device_id": "system", 00:06:26.724 "dma_device_type": 1 00:06:26.724 }, 00:06:26.724 { 00:06:26.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.724 "dma_device_type": 2 00:06:26.724 } 00:06:26.724 ], 00:06:26.724 "driver_specific": { 00:06:26.724 "passthru": { 00:06:26.724 "name": "Passthru0", 00:06:26.724 "base_bdev_name": "Malloc2" 00:06:26.724 } 00:06:26.724 } 00:06:26.724 } 00:06:26.724 ]' 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:26.724 16:45:46 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:26.724 00:06:26.724 real 0m0.286s 00:06:26.724 user 0m0.180s 00:06:26.724 sys 0m0.044s 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.724 16:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.724 ************************************ 00:06:26.724 END TEST rpc_daemon_integrity 00:06:26.724 ************************************ 00:06:26.724 16:45:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:26.724 16:45:46 rpc -- rpc/rpc.sh@84 -- # killprocess 1216101 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@950 -- # '[' -z 1216101 ']' 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@954 -- # kill -0 1216101 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@955 -- # uname 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1216101 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1216101' 00:06:26.724 killing process with pid 1216101 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@969 -- # kill 1216101 00:06:26.724 16:45:46 rpc -- common/autotest_common.sh@974 -- # wait 1216101 00:06:26.986 00:06:26.986 real 0m2.457s 00:06:26.986 user 0m3.206s 00:06:26.986 sys 0m0.716s 00:06:26.986 16:45:47 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.986 16:45:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.986 ************************************ 00:06:26.986 END TEST rpc 00:06:26.986 ************************************ 00:06:26.986 16:45:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:26.986 16:45:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.986 16:45:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.986 16:45:47 -- common/autotest_common.sh@10 -- # set +x 00:06:26.986 ************************************ 00:06:26.986 START TEST skip_rpc 00:06:26.986 ************************************ 00:06:26.986 16:45:47 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:27.248 * Looking for test storage... 
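Each suite above tears the target down through the killprocess helper after clearing the EXIT trap, and the xtrace lines show its essential steps: verify the pid is non-empty and still alive, refuse to signal a sudo wrapper, then kill and reap. A simplified sketch of that sequence (not the exact autotest_common.sh implementation):

    # simplified teardown pattern mirroring the killprocess trace above
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                                     # nothing to kill
        kill -0 "$pid" 2>/dev/null || return 1                        # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                           # reap; non-zero exit after SIGTERM is fine
    }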
00:06:27.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:27.248 16:45:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:27.248 16:45:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:27.248 16:45:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:27.248 16:45:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.248 16:45:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.248 16:45:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.248 ************************************ 00:06:27.248 START TEST skip_rpc 00:06:27.248 ************************************ 00:06:27.248 16:45:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:27.248 16:45:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1216679 00:06:27.248 16:45:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.248 16:45:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:27.248 16:45:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:27.248 [2024-07-25 16:45:47.437321] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:27.248 [2024-07-25 16:45:47.437376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216679 ] 00:06:27.248 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.248 [2024-07-25 16:45:47.499107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.508 [2024-07-25 16:45:47.570912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1216679 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1216679 ']' 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1216679 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1216679 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1216679' 00:06:32.798 killing process with pid 1216679 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1216679 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1216679 00:06:32.798 00:06:32.798 real 0m5.269s 00:06:32.798 user 0m5.079s 00:06:32.798 sys 0m0.216s 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.798 16:45:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.798 ************************************ 00:06:32.799 END TEST skip_rpc 00:06:32.799 ************************************ 00:06:32.799 16:45:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:32.799 16:45:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.799 16:45:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.799 16:45:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.799 ************************************ 00:06:32.799 START TEST skip_rpc_with_json 00:06:32.799 ************************************ 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1217762 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1217762 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1217762 ']' 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
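skip_rpc above is a negative test: the target is started with --no-rpc-server, the script sleeps instead of waiting on a socket that will never appear, and the NOT wrapper asserts that spdk_get_version fails. A hand-run sketch of the same check (binary and script paths illustrative):

    # start the target with its RPC server disabled, as skip_rpc.sh does above
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5     # the test sleeps rather than poll a socket that never opens

    # any RPC must fail now; the test wraps this in NOT ... to require a non-zero exit
    if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC server answered with --no-rpc-server" >&2
    fi
    kill "$tgt_pid"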
00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.799 16:45:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.799 [2024-07-25 16:45:52.769636] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:32.799 [2024-07-25 16:45:52.769689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217762 ] 00:06:32.799 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.799 [2024-07-25 16:45:52.829373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.799 [2024-07-25 16:45:52.897866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.370 [2024-07-25 16:45:53.524837] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:33.370 request: 00:06:33.370 { 00:06:33.370 "trtype": "tcp", 00:06:33.370 "method": "nvmf_get_transports", 00:06:33.370 "req_id": 1 00:06:33.370 } 00:06:33.370 Got JSON-RPC error response 00:06:33.370 response: 00:06:33.370 { 00:06:33.370 "code": -19, 00:06:33.370 "message": "No such device" 00:06:33.370 } 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.370 [2024-07-25 16:45:53.532947] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.370 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.631 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.631 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:33.631 { 00:06:33.631 "subsystems": [ 00:06:33.631 { 00:06:33.631 "subsystem": "vfio_user_target", 00:06:33.631 "config": null 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "keyring", 00:06:33.631 "config": [] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "iobuf", 00:06:33.631 "config": [ 00:06:33.631 { 00:06:33.631 "method": "iobuf_set_options", 00:06:33.631 
"params": { 00:06:33.631 "small_pool_count": 8192, 00:06:33.631 "large_pool_count": 1024, 00:06:33.631 "small_bufsize": 8192, 00:06:33.631 "large_bufsize": 135168 00:06:33.631 } 00:06:33.631 } 00:06:33.631 ] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "sock", 00:06:33.631 "config": [ 00:06:33.631 { 00:06:33.631 "method": "sock_set_default_impl", 00:06:33.631 "params": { 00:06:33.631 "impl_name": "posix" 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "sock_impl_set_options", 00:06:33.631 "params": { 00:06:33.631 "impl_name": "ssl", 00:06:33.631 "recv_buf_size": 4096, 00:06:33.631 "send_buf_size": 4096, 00:06:33.631 "enable_recv_pipe": true, 00:06:33.631 "enable_quickack": false, 00:06:33.631 "enable_placement_id": 0, 00:06:33.631 "enable_zerocopy_send_server": true, 00:06:33.631 "enable_zerocopy_send_client": false, 00:06:33.631 "zerocopy_threshold": 0, 00:06:33.631 "tls_version": 0, 00:06:33.631 "enable_ktls": false 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "sock_impl_set_options", 00:06:33.631 "params": { 00:06:33.631 "impl_name": "posix", 00:06:33.631 "recv_buf_size": 2097152, 00:06:33.631 "send_buf_size": 2097152, 00:06:33.631 "enable_recv_pipe": true, 00:06:33.631 "enable_quickack": false, 00:06:33.631 "enable_placement_id": 0, 00:06:33.631 "enable_zerocopy_send_server": true, 00:06:33.631 "enable_zerocopy_send_client": false, 00:06:33.631 "zerocopy_threshold": 0, 00:06:33.631 "tls_version": 0, 00:06:33.631 "enable_ktls": false 00:06:33.631 } 00:06:33.631 } 00:06:33.631 ] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "vmd", 00:06:33.631 "config": [] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "accel", 00:06:33.631 "config": [ 00:06:33.631 { 00:06:33.631 "method": "accel_set_options", 00:06:33.631 "params": { 00:06:33.631 "small_cache_size": 128, 00:06:33.631 "large_cache_size": 16, 00:06:33.631 "task_count": 2048, 00:06:33.631 "sequence_count": 2048, 00:06:33.631 "buf_count": 2048 00:06:33.631 } 00:06:33.631 } 00:06:33.631 ] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "bdev", 00:06:33.631 "config": [ 00:06:33.631 { 00:06:33.631 "method": "bdev_set_options", 00:06:33.631 "params": { 00:06:33.631 "bdev_io_pool_size": 65535, 00:06:33.631 "bdev_io_cache_size": 256, 00:06:33.631 "bdev_auto_examine": true, 00:06:33.631 "iobuf_small_cache_size": 128, 00:06:33.631 "iobuf_large_cache_size": 16 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "bdev_raid_set_options", 00:06:33.631 "params": { 00:06:33.631 "process_window_size_kb": 1024, 00:06:33.631 "process_max_bandwidth_mb_sec": 0 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "bdev_iscsi_set_options", 00:06:33.631 "params": { 00:06:33.631 "timeout_sec": 30 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "bdev_nvme_set_options", 00:06:33.631 "params": { 00:06:33.631 "action_on_timeout": "none", 00:06:33.631 "timeout_us": 0, 00:06:33.631 "timeout_admin_us": 0, 00:06:33.631 "keep_alive_timeout_ms": 10000, 00:06:33.631 "arbitration_burst": 0, 00:06:33.631 "low_priority_weight": 0, 00:06:33.631 "medium_priority_weight": 0, 00:06:33.631 "high_priority_weight": 0, 00:06:33.631 "nvme_adminq_poll_period_us": 10000, 00:06:33.631 "nvme_ioq_poll_period_us": 0, 00:06:33.631 "io_queue_requests": 0, 00:06:33.631 "delay_cmd_submit": true, 00:06:33.631 "transport_retry_count": 4, 00:06:33.631 "bdev_retry_count": 3, 00:06:33.631 "transport_ack_timeout": 0, 00:06:33.631 "ctrlr_loss_timeout_sec": 0, 
00:06:33.631 "reconnect_delay_sec": 0, 00:06:33.631 "fast_io_fail_timeout_sec": 0, 00:06:33.631 "disable_auto_failback": false, 00:06:33.631 "generate_uuids": false, 00:06:33.631 "transport_tos": 0, 00:06:33.631 "nvme_error_stat": false, 00:06:33.631 "rdma_srq_size": 0, 00:06:33.631 "io_path_stat": false, 00:06:33.631 "allow_accel_sequence": false, 00:06:33.631 "rdma_max_cq_size": 0, 00:06:33.631 "rdma_cm_event_timeout_ms": 0, 00:06:33.631 "dhchap_digests": [ 00:06:33.631 "sha256", 00:06:33.631 "sha384", 00:06:33.631 "sha512" 00:06:33.631 ], 00:06:33.631 "dhchap_dhgroups": [ 00:06:33.631 "null", 00:06:33.631 "ffdhe2048", 00:06:33.631 "ffdhe3072", 00:06:33.631 "ffdhe4096", 00:06:33.631 "ffdhe6144", 00:06:33.631 "ffdhe8192" 00:06:33.631 ] 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "bdev_nvme_set_hotplug", 00:06:33.631 "params": { 00:06:33.631 "period_us": 100000, 00:06:33.631 "enable": false 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "bdev_wait_for_examine" 00:06:33.631 } 00:06:33.631 ] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "scsi", 00:06:33.631 "config": null 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "scheduler", 00:06:33.631 "config": [ 00:06:33.631 { 00:06:33.631 "method": "framework_set_scheduler", 00:06:33.631 "params": { 00:06:33.631 "name": "static" 00:06:33.631 } 00:06:33.631 } 00:06:33.631 ] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "vhost_scsi", 00:06:33.631 "config": [] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "vhost_blk", 00:06:33.631 "config": [] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "ublk", 00:06:33.631 "config": [] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "nbd", 00:06:33.631 "config": [] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "nvmf", 00:06:33.631 "config": [ 00:06:33.631 { 00:06:33.631 "method": "nvmf_set_config", 00:06:33.631 "params": { 00:06:33.631 "discovery_filter": "match_any", 00:06:33.631 "admin_cmd_passthru": { 00:06:33.631 "identify_ctrlr": false 00:06:33.631 } 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "nvmf_set_max_subsystems", 00:06:33.631 "params": { 00:06:33.631 "max_subsystems": 1024 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "nvmf_set_crdt", 00:06:33.631 "params": { 00:06:33.631 "crdt1": 0, 00:06:33.631 "crdt2": 0, 00:06:33.631 "crdt3": 0 00:06:33.631 } 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "method": "nvmf_create_transport", 00:06:33.631 "params": { 00:06:33.631 "trtype": "TCP", 00:06:33.631 "max_queue_depth": 128, 00:06:33.631 "max_io_qpairs_per_ctrlr": 127, 00:06:33.631 "in_capsule_data_size": 4096, 00:06:33.631 "max_io_size": 131072, 00:06:33.631 "io_unit_size": 131072, 00:06:33.631 "max_aq_depth": 128, 00:06:33.631 "num_shared_buffers": 511, 00:06:33.631 "buf_cache_size": 4294967295, 00:06:33.631 "dif_insert_or_strip": false, 00:06:33.631 "zcopy": false, 00:06:33.631 "c2h_success": true, 00:06:33.631 "sock_priority": 0, 00:06:33.631 "abort_timeout_sec": 1, 00:06:33.631 "ack_timeout": 0, 00:06:33.631 "data_wr_pool_size": 0 00:06:33.631 } 00:06:33.631 } 00:06:33.631 ] 00:06:33.631 }, 00:06:33.631 { 00:06:33.631 "subsystem": "iscsi", 00:06:33.631 "config": [ 00:06:33.631 { 00:06:33.631 "method": "iscsi_set_options", 00:06:33.631 "params": { 00:06:33.631 "node_base": "iqn.2016-06.io.spdk", 00:06:33.631 "max_sessions": 128, 00:06:33.631 "max_connections_per_session": 2, 00:06:33.631 "max_queue_depth": 64, 00:06:33.631 "default_time2wait": 
2, 00:06:33.631 "default_time2retain": 20, 00:06:33.631 "first_burst_length": 8192, 00:06:33.631 "immediate_data": true, 00:06:33.631 "allow_duplicated_isid": false, 00:06:33.631 "error_recovery_level": 0, 00:06:33.631 "nop_timeout": 60, 00:06:33.631 "nop_in_interval": 30, 00:06:33.631 "disable_chap": false, 00:06:33.631 "require_chap": false, 00:06:33.631 "mutual_chap": false, 00:06:33.632 "chap_group": 0, 00:06:33.632 "max_large_datain_per_connection": 64, 00:06:33.632 "max_r2t_per_connection": 4, 00:06:33.632 "pdu_pool_size": 36864, 00:06:33.632 "immediate_data_pool_size": 16384, 00:06:33.632 "data_out_pool_size": 2048 00:06:33.632 } 00:06:33.632 } 00:06:33.632 ] 00:06:33.632 } 00:06:33.632 ] 00:06:33.632 } 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1217762 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1217762 ']' 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1217762 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1217762 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1217762' 00:06:33.632 killing process with pid 1217762 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1217762 00:06:33.632 16:45:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1217762 00:06:33.892 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1218060 00:06:33.892 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:33.892 16:45:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1218060 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1218060 ']' 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1218060 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1218060 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1218060' 00:06:39.227 killing process with pid 1218060 00:06:39.227 16:45:58 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1218060 00:06:39.227 16:45:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1218060 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:39.227 00:06:39.227 real 0m6.494s 00:06:39.227 user 0m6.355s 00:06:39.227 sys 0m0.492s 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:39.227 ************************************ 00:06:39.227 END TEST skip_rpc_with_json 00:06:39.227 ************************************ 00:06:39.227 16:45:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:39.227 16:45:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.227 16:45:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.227 16:45:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.227 ************************************ 00:06:39.227 START TEST skip_rpc_with_delay 00:06:39.227 ************************************ 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.227 [2024-07-25 16:45:59.327267] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is 
going to be started. 00:06:39.227 [2024-07-25 16:45:59.327353] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.227 00:06:39.227 real 0m0.067s 00:06:39.227 user 0m0.040s 00:06:39.227 sys 0m0.027s 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.227 16:45:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:39.227 ************************************ 00:06:39.227 END TEST skip_rpc_with_delay 00:06:39.227 ************************************ 00:06:39.227 16:45:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:39.227 16:45:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:39.227 16:45:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:39.227 16:45:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.227 16:45:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.227 16:45:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.227 ************************************ 00:06:39.227 START TEST exit_on_failed_rpc_init 00:06:39.227 ************************************ 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1219170 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1219170 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1219170 ']' 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.227 16:45:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.227 [2024-07-25 16:45:59.470251] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
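The skip_rpc_with_json run that finished just above reduces to a short sequence; a minimal sketch with the workspace paths shortened (the redirection of the target's output into log.txt is an assumption here, since set -x does not print redirections, but the later grep implies it):

  # Relaunch spdk_tgt from the JSON produced by save_config, with the RPC
  # server disabled, then confirm from its log that the TCP transport came up.
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
      > test/rpc/log.txt 2>&1 &
  spdk_pid=$!
  sleep 5
  kill "$spdk_pid"
  grep -q 'TCP Transport Init' test/rpc/log.txt   # test fails if the marker is absent
  rm test/rpc/log.txt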
00:06:39.227 [2024-07-25 16:45:59.470311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219170 ] 00:06:39.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.489 [2024-07-25 16:45:59.534367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.489 [2024-07-25 16:45:59.609181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:40.061 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.061 [2024-07-25 16:46:00.290462] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:06:40.061 [2024-07-25 16:46:00.290515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219471 ] 00:06:40.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.322 [2024-07-25 16:46:00.366951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.322 [2024-07-25 16:46:00.430777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.322 [2024-07-25 16:46:00.430838] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:40.322 [2024-07-25 16:46:00.430848] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:40.322 [2024-07-25 16:46:00.430854] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1219170 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1219170 ']' 00:06:40.322 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1219170 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1219170 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1219170' 00:06:40.323 killing process with pid 1219170 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1219170 00:06:40.323 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1219170 00:06:40.584 00:06:40.584 real 0m1.341s 00:06:40.584 user 0m1.570s 00:06:40.584 sys 0m0.367s 00:06:40.584 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.584 16:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:40.584 ************************************ 00:06:40.584 END TEST exit_on_failed_rpc_init 00:06:40.584 ************************************ 00:06:40.584 16:46:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:40.584 00:06:40.584 real 0m13.547s 00:06:40.584 user 0m13.168s 00:06:40.584 sys 0m1.375s 00:06:40.584 16:46:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.584 16:46:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.584 ************************************ 00:06:40.584 END TEST skip_rpc 00:06:40.584 ************************************ 00:06:40.584 16:46:00 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:40.584 16:46:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.584 16:46:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.584 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:06:40.845 ************************************ 00:06:40.845 START TEST rpc_client 00:06:40.845 ************************************ 00:06:40.845 16:46:00 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:40.845 * Looking for test storage... 00:06:40.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:40.845 16:46:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:40.845 OK 00:06:40.845 16:46:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:40.845 00:06:40.845 real 0m0.125s 00:06:40.845 user 0m0.057s 00:06:40.845 sys 0m0.076s 00:06:40.845 16:46:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.845 16:46:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:40.845 ************************************ 00:06:40.845 END TEST rpc_client 00:06:40.845 ************************************ 00:06:40.845 16:46:01 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:40.845 16:46:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.845 16:46:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.845 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:06:40.845 ************************************ 00:06:40.845 START TEST json_config 00:06:40.845 ************************************ 00:06:40.845 16:46:01 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:06:41.107 16:46:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.107 16:46:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.107 16:46:01 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.107 16:46:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.107 16:46:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.107 16:46:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.107 16:46:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.107 16:46:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:41.107 16:46:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@47 -- # : 0 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.107 16:46:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:41.107 INFO: JSON configuration test init 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.107 16:46:01 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:41.107 16:46:01 json_config -- json_config/common.sh@9 -- # local app=target 00:06:41.107 16:46:01 json_config -- json_config/common.sh@10 -- # shift 00:06:41.107 16:46:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.107 16:46:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.107 16:46:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.107 16:46:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:06:41.107 16:46:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.107 16:46:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1219701 00:06:41.107 16:46:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.107 Waiting for target to run... 00:06:41.107 16:46:01 json_config -- json_config/common.sh@25 -- # waitforlisten 1219701 /var/tmp/spdk_tgt.sock 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@831 -- # '[' -z 1219701 ']' 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.107 16:46:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:41.107 16:46:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.107 [2024-07-25 16:46:01.245075] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:41.107 [2024-07-25 16:46:01.245152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219701 ] 00:06:41.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.368 [2024-07-25 16:46:01.503579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.368 [2024-07-25 16:46:01.554053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.939 16:46:02 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.939 16:46:02 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:41.939 16:46:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:41.939 00:06:41.939 16:46:02 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:41.939 16:46:02 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:41.939 16:46:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.939 16:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.939 16:46:02 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:41.939 16:46:02 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:41.939 16:46:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.939 16:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.939 16:46:02 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:41.939 16:46:02 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:41.939 16:46:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:42.511 16:46:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.511 16:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:42.511 16:46:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@51 -- # sort 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:42.511 16:46:02 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:42.511 16:46:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.511 16:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:42.772 16:46:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.772 16:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:42.772 16:46:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:42.772 16:46:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:42.772 MallocForNvmf0 00:06:42.772 
16:46:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:42.773 16:46:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:43.034 MallocForNvmf1 00:06:43.034 16:46:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:43.034 16:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:43.294 [2024-07-25 16:46:03.307364] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.294 16:46:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:43.294 16:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:43.294 16:46:03 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:43.294 16:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:43.555 16:46:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:43.555 16:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:43.555 16:46:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:43.555 16:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:43.815 [2024-07-25 16:46:03.957439] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:43.815 16:46:03 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:43.815 16:46:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.815 16:46:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.815 16:46:04 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:43.815 16:46:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.815 16:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.815 16:46:04 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:43.815 16:46:04 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:43.815 16:46:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:44.076 MallocBdevForConfigChangeCheck 00:06:44.076 16:46:04 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:44.076 16:46:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.076 16:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.076 16:46:04 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:44.076 16:46:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.337 16:46:04 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:44.337 INFO: shutting down applications... 00:06:44.337 16:46:04 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:44.337 16:46:04 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:44.337 16:46:04 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:44.337 16:46:04 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:44.909 Calling clear_iscsi_subsystem 00:06:44.909 Calling clear_nvmf_subsystem 00:06:44.909 Calling clear_nbd_subsystem 00:06:44.909 Calling clear_ublk_subsystem 00:06:44.909 Calling clear_vhost_blk_subsystem 00:06:44.909 Calling clear_vhost_scsi_subsystem 00:06:44.909 Calling clear_bdev_subsystem 00:06:44.909 16:46:04 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:44.909 16:46:04 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:44.909 16:46:04 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:44.909 16:46:04 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:44.909 16:46:04 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.909 16:46:04 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:45.170 16:46:05 json_config -- json_config/json_config.sh@349 -- # break 00:06:45.170 16:46:05 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:45.170 16:46:05 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:45.170 16:46:05 json_config -- json_config/common.sh@31 -- # local app=target 00:06:45.170 16:46:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:45.170 16:46:05 json_config -- json_config/common.sh@35 -- # [[ -n 1219701 ]] 00:06:45.170 16:46:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1219701 00:06:45.170 16:46:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:45.170 16:46:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.170 16:46:05 json_config -- json_config/common.sh@41 -- # kill -0 1219701 00:06:45.170 16:46:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:45.742 16:46:05 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:45.742 16:46:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.742 16:46:05 json_config -- json_config/common.sh@41 -- # kill -0 1219701 00:06:45.742 16:46:05 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:06:45.742 16:46:05 json_config -- json_config/common.sh@43 -- # break 00:06:45.742 16:46:05 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:45.742 16:46:05 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:45.742 SPDK target shutdown done 00:06:45.742 16:46:05 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:45.742 INFO: relaunching applications... 00:06:45.742 16:46:05 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.742 16:46:05 json_config -- json_config/common.sh@9 -- # local app=target 00:06:45.742 16:46:05 json_config -- json_config/common.sh@10 -- # shift 00:06:45.742 16:46:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:45.742 16:46:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:45.742 16:46:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:45.742 16:46:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.742 16:46:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.742 16:46:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1220726 00:06:45.742 16:46:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:45.742 Waiting for target to run... 00:06:45.742 16:46:05 json_config -- json_config/common.sh@25 -- # waitforlisten 1220726 /var/tmp/spdk_tgt.sock 00:06:45.742 16:46:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.742 16:46:05 json_config -- common/autotest_common.sh@831 -- # '[' -z 1220726 ']' 00:06:45.742 16:46:05 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:45.742 16:46:05 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.742 16:46:05 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:45.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:45.742 16:46:05 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.742 16:46:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.742 [2024-07-25 16:46:05.840923] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
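Everything the relaunch above restores was created over RPC earlier in this json_config run; a consolidated sketch of that sequence, using the same commands as the trace and assuming a target already serving RPC on /var/tmp/spdk_tgt.sock:

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB malloc bdev, 512-byte blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB malloc bdev, 1024-byte blocks
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0             # bring up the TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $rpc save_config > spdk_tgt_config.json                    # snapshot the relaunch reads back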
00:06:45.742 [2024-07-25 16:46:05.840979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220726 ] 00:06:45.742 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.003 [2024-07-25 16:46:06.150527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.003 [2024-07-25 16:46:06.212389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.574 [2024-07-25 16:46:06.714199] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.574 [2024-07-25 16:46:06.746567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:46.574 16:46:06 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.574 16:46:06 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:46.574 16:46:06 json_config -- json_config/common.sh@26 -- # echo '' 00:06:46.574 00:06:46.574 16:46:06 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:46.574 16:46:06 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:46.574 INFO: Checking if target configuration is the same... 00:06:46.574 16:46:06 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:46.574 16:46:06 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:46.574 16:46:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:46.574 + '[' 2 -ne 2 ']' 00:06:46.574 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:46.574 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:46.574 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:46.574 +++ basename /dev/fd/62 00:06:46.574 ++ mktemp /tmp/62.XXX 00:06:46.574 + tmp_file_1=/tmp/62.QTe 00:06:46.574 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:46.574 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:46.574 + tmp_file_2=/tmp/spdk_tgt_config.json.HRK 00:06:46.574 + ret=0 00:06:46.574 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:46.834 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:47.095 + diff -u /tmp/62.QTe /tmp/spdk_tgt_config.json.HRK 00:06:47.095 + echo 'INFO: JSON config files are the same' 00:06:47.095 INFO: JSON config files are the same 00:06:47.095 + rm /tmp/62.QTe /tmp/spdk_tgt_config.json.HRK 00:06:47.095 + exit 0 00:06:47.096 16:46:07 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:47.096 16:46:07 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:47.096 INFO: changing configuration and checking if this can be detected... 
00:06:47.096 16:46:07 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:47.096 16:46:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:47.096 16:46:07 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:47.096 16:46:07 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.096 16:46:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:47.096 + '[' 2 -ne 2 ']' 00:06:47.096 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:47.096 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:47.096 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:47.096 +++ basename /dev/fd/62 00:06:47.096 ++ mktemp /tmp/62.XXX 00:06:47.096 + tmp_file_1=/tmp/62.mzs 00:06:47.096 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.096 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:47.096 + tmp_file_2=/tmp/spdk_tgt_config.json.2zP 00:06:47.096 + ret=0 00:06:47.096 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:47.357 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:47.618 + diff -u /tmp/62.mzs /tmp/spdk_tgt_config.json.2zP 00:06:47.618 + ret=1 00:06:47.618 + echo '=== Start of file: /tmp/62.mzs ===' 00:06:47.618 + cat /tmp/62.mzs 00:06:47.618 + echo '=== End of file: /tmp/62.mzs ===' 00:06:47.618 + echo '' 00:06:47.618 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2zP ===' 00:06:47.618 + cat /tmp/spdk_tgt_config.json.2zP 00:06:47.618 + echo '=== End of file: /tmp/spdk_tgt_config.json.2zP ===' 00:06:47.618 + echo '' 00:06:47.618 + rm /tmp/62.mzs /tmp/spdk_tgt_config.json.2zP 00:06:47.618 + exit 1 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:47.618 INFO: configuration change detected. 
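Both verdicts above ('JSON config files are the same', then 'configuration change detected.' once MallocBdevForConfigChangeCheck is deleted) come from json_diff.sh, which normalizes each side with config_filter.py before diffing. A rough sketch of that comparison, with paths shortened and assuming config_filter.py reads the configuration on stdin as the /dev/fd/62 plumbing above suggests:

  # Normalize the live configuration and the saved file, then diff them.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort \
      < spdk_tgt_config.json > /tmp/saved.json
  if diff -u /tmp/live.json /tmp/saved.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi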
00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@321 -- # [[ -n 1220726 ]] 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.618 16:46:07 json_config -- json_config/json_config.sh@327 -- # killprocess 1220726 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@950 -- # '[' -z 1220726 ']' 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@954 -- # kill -0 1220726 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@955 -- # uname 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1220726 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1220726' 00:06:47.618 killing process with pid 1220726 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@969 -- # kill 1220726 00:06:47.618 16:46:07 json_config -- common/autotest_common.sh@974 -- # wait 1220726 00:06:47.880 16:46:08 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.880 16:46:08 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:47.880 16:46:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.880 16:46:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.880 16:46:08 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:47.880 16:46:08 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:47.880 INFO: Success 00:06:47.880 00:06:47.880 real 0m7.036s 
00:06:47.880 user 0m8.543s 00:06:47.880 sys 0m1.686s 00:06:47.880 16:46:08 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.880 16:46:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.880 ************************************ 00:06:47.880 END TEST json_config 00:06:47.880 ************************************ 00:06:47.880 16:46:08 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:47.880 16:46:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.880 16:46:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.880 16:46:08 -- common/autotest_common.sh@10 -- # set +x 00:06:48.142 ************************************ 00:06:48.142 START TEST json_config_extra_key 00:06:48.142 ************************************ 00:06:48.142 16:46:08 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:48.142 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.142 16:46:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.142 16:46:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.142 16:46:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.142 16:46:08 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.142 16:46:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.142 16:46:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.142 16:46:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:48.142 16:46:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.142 16:46:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.142 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:48.142 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:48.143 16:46:08 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:48.143 INFO: launching applications... 00:06:48.143 16:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1221449 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:48.143 Waiting for target to run... 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1221449 /var/tmp/spdk_tgt.sock 00:06:48.143 16:46:08 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1221449 ']' 00:06:48.143 16:46:08 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:48.143 16:46:08 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.143 16:46:08 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:48.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:48.143 16:46:08 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.143 16:46:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.143 16:46:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:48.143 [2024-07-25 16:46:08.304536] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
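The extra_key target launched above is torn down the same way the json_config target was earlier: SIGINT, then a bounded poll until the PID disappears. A minimal sketch of that loop (spdk_pid would be 1221449 in this run):

  kill -SIGINT "$spdk_pid"
  for ((i = 0; i < 30; i++)); do            # up to ~15 s at 0.5 s per probe
      kill -0 "$spdk_pid" 2>/dev/null || break
      sleep 0.5
  done
  echo 'SPDK target shutdown done'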
00:06:48.143 [2024-07-25 16:46:08.304606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221449 ] 00:06:48.143 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.404 [2024-07-25 16:46:08.604434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.404 [2024-07-25 16:46:08.661474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.995 16:46:09 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.995 16:46:09 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:48.995 16:46:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:48.995 00:06:48.995 16:46:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:48.995 INFO: shutting down applications... 00:06:48.995 16:46:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:48.995 16:46:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:48.995 16:46:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:48.995 16:46:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1221449 ]] 00:06:48.995 16:46:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1221449 00:06:48.995 16:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:48.996 16:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.996 16:46:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1221449 00:06:48.996 16:46:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:49.568 16:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:49.568 16:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.568 16:46:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1221449 00:06:49.568 16:46:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:49.568 16:46:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:49.568 16:46:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:49.568 16:46:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:49.568 SPDK target shutdown done 00:06:49.568 16:46:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:49.568 Success 00:06:49.568 00:06:49.568 real 0m1.398s 00:06:49.568 user 0m1.017s 00:06:49.568 sys 0m0.396s 00:06:49.568 16:46:09 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.568 16:46:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:49.568 ************************************ 00:06:49.568 END TEST json_config_extra_key 00:06:49.568 ************************************ 00:06:49.568 16:46:09 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:49.568 16:46:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.568 16:46:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.568 16:46:09 -- common/autotest_common.sh@10 -- # set +x 00:06:49.568 
************************************ 00:06:49.568 START TEST alias_rpc 00:06:49.568 ************************************ 00:06:49.568 16:46:09 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:49.568 * Looking for test storage... 00:06:49.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:49.568 16:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.568 16:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1221733 00:06:49.568 16:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1221733 00:06:49.568 16:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:49.568 16:46:09 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1221733 ']' 00:06:49.568 16:46:09 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.568 16:46:09 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.568 16:46:09 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.569 16:46:09 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.569 16:46:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.569 [2024-07-25 16:46:09.800396] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:49.569 [2024-07-25 16:46:09.800467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221733 ] 00:06:49.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.830 [2024-07-25 16:46:09.866878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.830 [2024-07-25 16:46:09.942252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.401 16:46:10 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.401 16:46:10 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.401 16:46:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:50.662 16:46:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1221733 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1221733 ']' 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1221733 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1221733 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1221733' 00:06:50.662 killing process with pid 1221733 00:06:50.662 16:46:10 alias_rpc -- common/autotest_common.sh@969 -- # kill 1221733 00:06:50.662 16:46:10 
alias_rpc -- common/autotest_common.sh@974 -- # wait 1221733 00:06:50.923 00:06:50.923 real 0m1.375s 00:06:50.923 user 0m1.534s 00:06:50.923 sys 0m0.364s 00:06:50.923 16:46:11 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.923 16:46:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.923 ************************************ 00:06:50.923 END TEST alias_rpc 00:06:50.923 ************************************ 00:06:50.923 16:46:11 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:50.923 16:46:11 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:50.923 16:46:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.923 16:46:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.923 16:46:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.923 ************************************ 00:06:50.923 START TEST spdkcli_tcp 00:06:50.923 ************************************ 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:50.923 * Looking for test storage... 00:06:50.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1221983 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1221983 00:06:50.923 16:46:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1221983 ']' 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.923 16:46:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.184 [2024-07-25 16:46:11.250322] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
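Both application shutdowns above (json_config_test_shutdown_app for the extra-key target and killprocess for the alias_rpc target) use the same signal-and-poll idiom: send SIGINT, then check the PID with kill -0 every half second for at most 30 iterations. Condensed into a sketch, assuming $app_pid holds the target's PID:

    # Graceful shutdown: SIGINT, then poll until the process is gone (~15 s max).
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only tests whether the PID still exists.
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done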
00:06:51.184 [2024-07-25 16:46:11.250398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221983 ] 00:06:51.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.184 [2024-07-25 16:46:11.314766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.184 [2024-07-25 16:46:11.390798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.184 [2024-07-25 16:46:11.390800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.756 16:46:12 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.756 16:46:12 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:51.756 16:46:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1222291 00:06:51.756 16:46:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:51.756 16:46:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:52.017 [ 00:06:52.017 "bdev_malloc_delete", 00:06:52.017 "bdev_malloc_create", 00:06:52.017 "bdev_null_resize", 00:06:52.017 "bdev_null_delete", 00:06:52.017 "bdev_null_create", 00:06:52.017 "bdev_nvme_cuse_unregister", 00:06:52.017 "bdev_nvme_cuse_register", 00:06:52.017 "bdev_opal_new_user", 00:06:52.017 "bdev_opal_set_lock_state", 00:06:52.017 "bdev_opal_delete", 00:06:52.017 "bdev_opal_get_info", 00:06:52.017 "bdev_opal_create", 00:06:52.017 "bdev_nvme_opal_revert", 00:06:52.017 "bdev_nvme_opal_init", 00:06:52.017 "bdev_nvme_send_cmd", 00:06:52.017 "bdev_nvme_get_path_iostat", 00:06:52.017 "bdev_nvme_get_mdns_discovery_info", 00:06:52.017 "bdev_nvme_stop_mdns_discovery", 00:06:52.017 "bdev_nvme_start_mdns_discovery", 00:06:52.017 "bdev_nvme_set_multipath_policy", 00:06:52.017 "bdev_nvme_set_preferred_path", 00:06:52.017 "bdev_nvme_get_io_paths", 00:06:52.017 "bdev_nvme_remove_error_injection", 00:06:52.017 "bdev_nvme_add_error_injection", 00:06:52.017 "bdev_nvme_get_discovery_info", 00:06:52.017 "bdev_nvme_stop_discovery", 00:06:52.017 "bdev_nvme_start_discovery", 00:06:52.017 "bdev_nvme_get_controller_health_info", 00:06:52.017 "bdev_nvme_disable_controller", 00:06:52.017 "bdev_nvme_enable_controller", 00:06:52.017 "bdev_nvme_reset_controller", 00:06:52.017 "bdev_nvme_get_transport_statistics", 00:06:52.017 "bdev_nvme_apply_firmware", 00:06:52.017 "bdev_nvme_detach_controller", 00:06:52.017 "bdev_nvme_get_controllers", 00:06:52.017 "bdev_nvme_attach_controller", 00:06:52.017 "bdev_nvme_set_hotplug", 00:06:52.017 "bdev_nvme_set_options", 00:06:52.017 "bdev_passthru_delete", 00:06:52.017 "bdev_passthru_create", 00:06:52.017 "bdev_lvol_set_parent_bdev", 00:06:52.017 "bdev_lvol_set_parent", 00:06:52.017 "bdev_lvol_check_shallow_copy", 00:06:52.017 "bdev_lvol_start_shallow_copy", 00:06:52.017 "bdev_lvol_grow_lvstore", 00:06:52.017 "bdev_lvol_get_lvols", 00:06:52.017 "bdev_lvol_get_lvstores", 00:06:52.017 "bdev_lvol_delete", 00:06:52.017 "bdev_lvol_set_read_only", 00:06:52.017 "bdev_lvol_resize", 00:06:52.017 "bdev_lvol_decouple_parent", 00:06:52.017 "bdev_lvol_inflate", 00:06:52.017 "bdev_lvol_rename", 00:06:52.017 "bdev_lvol_clone_bdev", 00:06:52.017 "bdev_lvol_clone", 00:06:52.017 "bdev_lvol_snapshot", 00:06:52.017 "bdev_lvol_create", 00:06:52.017 "bdev_lvol_delete_lvstore", 00:06:52.017 
"bdev_lvol_rename_lvstore", 00:06:52.017 "bdev_lvol_create_lvstore", 00:06:52.017 "bdev_raid_set_options", 00:06:52.017 "bdev_raid_remove_base_bdev", 00:06:52.017 "bdev_raid_add_base_bdev", 00:06:52.017 "bdev_raid_delete", 00:06:52.017 "bdev_raid_create", 00:06:52.017 "bdev_raid_get_bdevs", 00:06:52.017 "bdev_error_inject_error", 00:06:52.017 "bdev_error_delete", 00:06:52.017 "bdev_error_create", 00:06:52.017 "bdev_split_delete", 00:06:52.017 "bdev_split_create", 00:06:52.017 "bdev_delay_delete", 00:06:52.017 "bdev_delay_create", 00:06:52.017 "bdev_delay_update_latency", 00:06:52.017 "bdev_zone_block_delete", 00:06:52.017 "bdev_zone_block_create", 00:06:52.017 "blobfs_create", 00:06:52.017 "blobfs_detect", 00:06:52.017 "blobfs_set_cache_size", 00:06:52.017 "bdev_aio_delete", 00:06:52.017 "bdev_aio_rescan", 00:06:52.017 "bdev_aio_create", 00:06:52.017 "bdev_ftl_set_property", 00:06:52.017 "bdev_ftl_get_properties", 00:06:52.017 "bdev_ftl_get_stats", 00:06:52.017 "bdev_ftl_unmap", 00:06:52.017 "bdev_ftl_unload", 00:06:52.017 "bdev_ftl_delete", 00:06:52.017 "bdev_ftl_load", 00:06:52.017 "bdev_ftl_create", 00:06:52.017 "bdev_virtio_attach_controller", 00:06:52.017 "bdev_virtio_scsi_get_devices", 00:06:52.017 "bdev_virtio_detach_controller", 00:06:52.017 "bdev_virtio_blk_set_hotplug", 00:06:52.017 "bdev_iscsi_delete", 00:06:52.017 "bdev_iscsi_create", 00:06:52.017 "bdev_iscsi_set_options", 00:06:52.017 "accel_error_inject_error", 00:06:52.017 "ioat_scan_accel_module", 00:06:52.017 "dsa_scan_accel_module", 00:06:52.017 "iaa_scan_accel_module", 00:06:52.017 "vfu_virtio_create_scsi_endpoint", 00:06:52.017 "vfu_virtio_scsi_remove_target", 00:06:52.017 "vfu_virtio_scsi_add_target", 00:06:52.017 "vfu_virtio_create_blk_endpoint", 00:06:52.017 "vfu_virtio_delete_endpoint", 00:06:52.017 "keyring_file_remove_key", 00:06:52.017 "keyring_file_add_key", 00:06:52.017 "keyring_linux_set_options", 00:06:52.017 "iscsi_get_histogram", 00:06:52.017 "iscsi_enable_histogram", 00:06:52.017 "iscsi_set_options", 00:06:52.017 "iscsi_get_auth_groups", 00:06:52.017 "iscsi_auth_group_remove_secret", 00:06:52.017 "iscsi_auth_group_add_secret", 00:06:52.017 "iscsi_delete_auth_group", 00:06:52.017 "iscsi_create_auth_group", 00:06:52.017 "iscsi_set_discovery_auth", 00:06:52.017 "iscsi_get_options", 00:06:52.017 "iscsi_target_node_request_logout", 00:06:52.017 "iscsi_target_node_set_redirect", 00:06:52.017 "iscsi_target_node_set_auth", 00:06:52.017 "iscsi_target_node_add_lun", 00:06:52.017 "iscsi_get_stats", 00:06:52.017 "iscsi_get_connections", 00:06:52.017 "iscsi_portal_group_set_auth", 00:06:52.017 "iscsi_start_portal_group", 00:06:52.017 "iscsi_delete_portal_group", 00:06:52.017 "iscsi_create_portal_group", 00:06:52.017 "iscsi_get_portal_groups", 00:06:52.017 "iscsi_delete_target_node", 00:06:52.017 "iscsi_target_node_remove_pg_ig_maps", 00:06:52.017 "iscsi_target_node_add_pg_ig_maps", 00:06:52.017 "iscsi_create_target_node", 00:06:52.017 "iscsi_get_target_nodes", 00:06:52.017 "iscsi_delete_initiator_group", 00:06:52.017 "iscsi_initiator_group_remove_initiators", 00:06:52.017 "iscsi_initiator_group_add_initiators", 00:06:52.017 "iscsi_create_initiator_group", 00:06:52.017 "iscsi_get_initiator_groups", 00:06:52.017 "nvmf_set_crdt", 00:06:52.017 "nvmf_set_config", 00:06:52.017 "nvmf_set_max_subsystems", 00:06:52.017 "nvmf_stop_mdns_prr", 00:06:52.017 "nvmf_publish_mdns_prr", 00:06:52.017 "nvmf_subsystem_get_listeners", 00:06:52.017 "nvmf_subsystem_get_qpairs", 00:06:52.017 "nvmf_subsystem_get_controllers", 00:06:52.017 
"nvmf_get_stats", 00:06:52.017 "nvmf_get_transports", 00:06:52.017 "nvmf_create_transport", 00:06:52.017 "nvmf_get_targets", 00:06:52.017 "nvmf_delete_target", 00:06:52.017 "nvmf_create_target", 00:06:52.017 "nvmf_subsystem_allow_any_host", 00:06:52.017 "nvmf_subsystem_remove_host", 00:06:52.017 "nvmf_subsystem_add_host", 00:06:52.017 "nvmf_ns_remove_host", 00:06:52.017 "nvmf_ns_add_host", 00:06:52.017 "nvmf_subsystem_remove_ns", 00:06:52.017 "nvmf_subsystem_add_ns", 00:06:52.017 "nvmf_subsystem_listener_set_ana_state", 00:06:52.017 "nvmf_discovery_get_referrals", 00:06:52.017 "nvmf_discovery_remove_referral", 00:06:52.017 "nvmf_discovery_add_referral", 00:06:52.017 "nvmf_subsystem_remove_listener", 00:06:52.017 "nvmf_subsystem_add_listener", 00:06:52.017 "nvmf_delete_subsystem", 00:06:52.017 "nvmf_create_subsystem", 00:06:52.017 "nvmf_get_subsystems", 00:06:52.017 "env_dpdk_get_mem_stats", 00:06:52.017 "nbd_get_disks", 00:06:52.017 "nbd_stop_disk", 00:06:52.017 "nbd_start_disk", 00:06:52.017 "ublk_recover_disk", 00:06:52.017 "ublk_get_disks", 00:06:52.017 "ublk_stop_disk", 00:06:52.017 "ublk_start_disk", 00:06:52.017 "ublk_destroy_target", 00:06:52.017 "ublk_create_target", 00:06:52.017 "virtio_blk_create_transport", 00:06:52.017 "virtio_blk_get_transports", 00:06:52.017 "vhost_controller_set_coalescing", 00:06:52.017 "vhost_get_controllers", 00:06:52.017 "vhost_delete_controller", 00:06:52.017 "vhost_create_blk_controller", 00:06:52.017 "vhost_scsi_controller_remove_target", 00:06:52.017 "vhost_scsi_controller_add_target", 00:06:52.017 "vhost_start_scsi_controller", 00:06:52.017 "vhost_create_scsi_controller", 00:06:52.017 "thread_set_cpumask", 00:06:52.017 "framework_get_governor", 00:06:52.017 "framework_get_scheduler", 00:06:52.017 "framework_set_scheduler", 00:06:52.017 "framework_get_reactors", 00:06:52.018 "thread_get_io_channels", 00:06:52.018 "thread_get_pollers", 00:06:52.018 "thread_get_stats", 00:06:52.018 "framework_monitor_context_switch", 00:06:52.018 "spdk_kill_instance", 00:06:52.018 "log_enable_timestamps", 00:06:52.018 "log_get_flags", 00:06:52.018 "log_clear_flag", 00:06:52.018 "log_set_flag", 00:06:52.018 "log_get_level", 00:06:52.018 "log_set_level", 00:06:52.018 "log_get_print_level", 00:06:52.018 "log_set_print_level", 00:06:52.018 "framework_enable_cpumask_locks", 00:06:52.018 "framework_disable_cpumask_locks", 00:06:52.018 "framework_wait_init", 00:06:52.018 "framework_start_init", 00:06:52.018 "scsi_get_devices", 00:06:52.018 "bdev_get_histogram", 00:06:52.018 "bdev_enable_histogram", 00:06:52.018 "bdev_set_qos_limit", 00:06:52.018 "bdev_set_qd_sampling_period", 00:06:52.018 "bdev_get_bdevs", 00:06:52.018 "bdev_reset_iostat", 00:06:52.018 "bdev_get_iostat", 00:06:52.018 "bdev_examine", 00:06:52.018 "bdev_wait_for_examine", 00:06:52.018 "bdev_set_options", 00:06:52.018 "notify_get_notifications", 00:06:52.018 "notify_get_types", 00:06:52.018 "accel_get_stats", 00:06:52.018 "accel_set_options", 00:06:52.018 "accel_set_driver", 00:06:52.018 "accel_crypto_key_destroy", 00:06:52.018 "accel_crypto_keys_get", 00:06:52.018 "accel_crypto_key_create", 00:06:52.018 "accel_assign_opc", 00:06:52.018 "accel_get_module_info", 00:06:52.018 "accel_get_opc_assignments", 00:06:52.018 "vmd_rescan", 00:06:52.018 "vmd_remove_device", 00:06:52.018 "vmd_enable", 00:06:52.018 "sock_get_default_impl", 00:06:52.018 "sock_set_default_impl", 00:06:52.018 "sock_impl_set_options", 00:06:52.018 "sock_impl_get_options", 00:06:52.018 "iobuf_get_stats", 00:06:52.018 "iobuf_set_options", 
00:06:52.018 "keyring_get_keys", 00:06:52.018 "framework_get_pci_devices", 00:06:52.018 "framework_get_config", 00:06:52.018 "framework_get_subsystems", 00:06:52.018 "vfu_tgt_set_base_path", 00:06:52.018 "trace_get_info", 00:06:52.018 "trace_get_tpoint_group_mask", 00:06:52.018 "trace_disable_tpoint_group", 00:06:52.018 "trace_enable_tpoint_group", 00:06:52.018 "trace_clear_tpoint_mask", 00:06:52.018 "trace_set_tpoint_mask", 00:06:52.018 "spdk_get_version", 00:06:52.018 "rpc_get_methods" 00:06:52.018 ] 00:06:52.018 16:46:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.018 16:46:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:52.018 16:46:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1221983 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1221983 ']' 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1221983 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1221983 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1221983' 00:06:52.018 killing process with pid 1221983 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1221983 00:06:52.018 16:46:12 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1221983 00:06:52.279 00:06:52.279 real 0m1.403s 00:06:52.279 user 0m2.571s 00:06:52.279 sys 0m0.413s 00:06:52.279 16:46:12 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.279 16:46:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.279 ************************************ 00:06:52.279 END TEST spdkcli_tcp 00:06:52.279 ************************************ 00:06:52.279 16:46:12 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:52.279 16:46:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.279 16:46:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.279 16:46:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.539 ************************************ 00:06:52.539 START TEST dpdk_mem_utility 00:06:52.539 ************************************ 00:06:52.539 16:46:12 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:52.539 * Looking for test storage... 
00:06:52.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:52.539 16:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:52.539 16:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1222358 00:06:52.539 16:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1222358 00:06:52.539 16:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:52.539 16:46:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1222358 ']' 00:06:52.539 16:46:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.539 16:46:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.539 16:46:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.539 16:46:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.539 16:46:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:52.539 [2024-07-25 16:46:12.714780] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:52.540 [2024-07-25 16:46:12.714831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222358 ] 00:06:52.540 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.540 [2024-07-25 16:46:12.776372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.801 [2024-07-25 16:46:12.841313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.373 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.373 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:53.373 16:46:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:53.373 16:46:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:53.373 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.373 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:53.373 { 00:06:53.373 "filename": "/tmp/spdk_mem_dump.txt" 00:06:53.373 } 00:06:53.373 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.373 16:46:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:53.373 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:53.373 1 heaps totaling size 814.000000 MiB 00:06:53.373 size: 814.000000 MiB heap id: 0 00:06:53.373 end heaps---------- 00:06:53.373 8 mempools totaling size 598.116089 MiB 00:06:53.373 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:53.373 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:53.373 size: 84.521057 MiB name: bdev_io_1222358 00:06:53.373 size: 51.011292 MiB name: evtpool_1222358 00:06:53.373 
size: 50.003479 MiB name: msgpool_1222358 00:06:53.373 size: 21.763794 MiB name: PDU_Pool 00:06:53.373 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:53.373 size: 0.026123 MiB name: Session_Pool 00:06:53.373 end mempools------- 00:06:53.373 6 memzones totaling size 4.142822 MiB 00:06:53.373 size: 1.000366 MiB name: RG_ring_0_1222358 00:06:53.373 size: 1.000366 MiB name: RG_ring_1_1222358 00:06:53.373 size: 1.000366 MiB name: RG_ring_4_1222358 00:06:53.373 size: 1.000366 MiB name: RG_ring_5_1222358 00:06:53.373 size: 0.125366 MiB name: RG_ring_2_1222358 00:06:53.373 size: 0.015991 MiB name: RG_ring_3_1222358 00:06:53.373 end memzones------- 00:06:53.373 16:46:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:53.373 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:53.373 list of free elements. size: 12.519348 MiB 00:06:53.373 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:53.374 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:53.374 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:53.374 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:53.374 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:53.374 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:53.374 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:53.374 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:53.374 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:53.374 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:53.374 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:53.374 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:53.374 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:53.374 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:53.374 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:53.374 list of standard malloc elements. 
size: 199.218079 MiB 00:06:53.374 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:53.374 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:53.374 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:53.374 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:53.374 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:53.374 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:53.374 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:53.374 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:53.374 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:53.374 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:53.374 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:53.374 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:53.374 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:53.374 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:53.374 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:53.374 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:53.374 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:53.374 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:53.374 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:53.374 list of memzone associated elements. 
size: 602.262573 MiB 00:06:53.374 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:53.374 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:53.374 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:53.374 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:53.374 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:53.374 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1222358_0 00:06:53.374 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:53.374 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1222358_0 00:06:53.374 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:53.374 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1222358_0 00:06:53.374 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:53.374 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:53.374 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:53.374 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:53.374 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:53.374 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1222358 00:06:53.374 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:53.374 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1222358 00:06:53.374 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:53.374 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1222358 00:06:53.374 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:53.374 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:53.374 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:53.374 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:53.374 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:53.374 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:53.374 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:53.374 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:53.374 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:53.374 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1222358 00:06:53.374 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:53.374 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1222358 00:06:53.374 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:53.374 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1222358 00:06:53.374 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:53.374 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1222358 00:06:53.374 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:53.374 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1222358 00:06:53.374 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:53.374 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:53.374 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:53.374 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:53.374 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:53.374 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:53.374 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:53.374 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1222358 00:06:53.374 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:53.374 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:53.374 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:53.374 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:53.374 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:53.374 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1222358 00:06:53.374 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:53.374 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:53.374 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:53.374 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1222358 00:06:53.374 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:53.374 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1222358 00:06:53.374 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:53.374 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:53.374 16:46:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:53.374 16:46:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1222358 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1222358 ']' 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1222358 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1222358 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1222358' 00:06:53.374 killing process with pid 1222358 00:06:53.374 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1222358 00:06:53.375 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1222358 00:06:53.636 00:06:53.636 real 0m1.290s 00:06:53.636 user 0m1.380s 00:06:53.636 sys 0m0.349s 00:06:53.636 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.636 16:46:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:53.636 ************************************ 00:06:53.636 END TEST dpdk_mem_utility 00:06:53.636 ************************************ 00:06:53.636 16:46:13 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:53.636 16:46:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.636 16:46:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.636 16:46:13 -- common/autotest_common.sh@10 -- # set +x 00:06:53.898 ************************************ 00:06:53.898 START TEST event 00:06:53.898 ************************************ 00:06:53.898 16:46:13 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:53.898 * Looking for test storage... 
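The dpdk_mem_utility pass above is two steps: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then parses that dump, once for the summary (heaps, mempools, memzones) and once with -m 0 for the element-by-element layout of heap 0. Against an already-running spdk_tgt on the default socket, the same two steps are roughly:

    # 1) Ask the target to dump its DPDK memory statistics (/tmp/spdk_mem_dump.txt).
    ./scripts/rpc.py env_dpdk_get_mem_stats

    # 2) Summarize the dump: heap totals, mempools, memzones.
    ./scripts/dpdk_mem_info.py

    # 3) Detailed free/busy malloc elements and memzone associations for heap id 0.
    ./scripts/dpdk_mem_info.py -m 0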
00:06:53.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:53.898 16:46:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:53.898 16:46:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:53.898 16:46:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:53.898 16:46:14 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:53.898 16:46:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.898 16:46:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.898 ************************************ 00:06:53.898 START TEST event_perf 00:06:53.898 ************************************ 00:06:53.898 16:46:14 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:53.898 Running I/O for 1 seconds...[2024-07-25 16:46:14.083471] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:53.898 [2024-07-25 16:46:14.083568] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222755 ] 00:06:53.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.898 [2024-07-25 16:46:14.146719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.159 [2024-07-25 16:46:14.217342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.159 [2024-07-25 16:46:14.217456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.159 [2024-07-25 16:46:14.217611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.159 Running I/O for 1 seconds...[2024-07-25 16:46:14.217611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.106 00:06:55.106 lcore 0: 174750 00:06:55.106 lcore 1: 174749 00:06:55.106 lcore 2: 174750 00:06:55.106 lcore 3: 174754 00:06:55.106 done. 00:06:55.106 00:06:55.106 real 0m1.208s 00:06:55.106 user 0m4.128s 00:06:55.106 sys 0m0.076s 00:06:55.106 16:46:15 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.106 16:46:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.106 ************************************ 00:06:55.106 END TEST event_perf 00:06:55.106 ************************************ 00:06:55.106 16:46:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:55.106 16:46:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:55.106 16:46:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.106 16:46:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.106 ************************************ 00:06:55.106 START TEST event_reactor 00:06:55.106 ************************************ 00:06:55.106 16:46:15 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:55.106 [2024-07-25 16:46:15.367629] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:06:55.106 [2024-07-25 16:46:15.367731] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223106 ] 00:06:55.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.398 [2024-07-25 16:46:15.430831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.398 [2024-07-25 16:46:15.497284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.342 test_start 00:06:56.342 oneshot 00:06:56.342 tick 100 00:06:56.342 tick 100 00:06:56.342 tick 250 00:06:56.342 tick 100 00:06:56.342 tick 100 00:06:56.342 tick 100 00:06:56.342 tick 250 00:06:56.342 tick 500 00:06:56.342 tick 100 00:06:56.342 tick 100 00:06:56.342 tick 250 00:06:56.342 tick 100 00:06:56.342 tick 100 00:06:56.342 test_end 00:06:56.342 00:06:56.342 real 0m1.203s 00:06:56.342 user 0m1.129s 00:06:56.342 sys 0m0.070s 00:06:56.342 16:46:16 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.342 16:46:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:56.342 ************************************ 00:06:56.342 END TEST event_reactor 00:06:56.342 ************************************ 00:06:56.342 16:46:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:56.342 16:46:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:56.342 16:46:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.342 16:46:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.603 ************************************ 00:06:56.603 START TEST event_reactor_perf 00:06:56.603 ************************************ 00:06:56.603 16:46:16 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:56.603 [2024-07-25 16:46:16.644321] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:06:56.603 [2024-07-25 16:46:16.644425] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223358 ] 00:06:56.603 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.603 [2024-07-25 16:46:16.708414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.603 [2024-07-25 16:46:16.777443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.988 test_start 00:06:57.988 test_end 00:06:57.988 Performance: 366656 events per second 00:06:57.988 00:06:57.988 real 0m1.207s 00:06:57.988 user 0m1.133s 00:06:57.988 sys 0m0.071s 00:06:57.988 16:46:17 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.988 16:46:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.988 ************************************ 00:06:57.988 END TEST event_reactor_perf 00:06:57.988 ************************************ 00:06:57.988 16:46:17 event -- event/event.sh@49 -- # uname -s 00:06:57.988 16:46:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:57.988 16:46:17 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:57.988 16:46:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.988 16:46:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.988 16:46:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.988 ************************************ 00:06:57.989 START TEST event_scheduler 00:06:57.989 ************************************ 00:06:57.989 16:46:17 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:57.989 * Looking for test storage... 00:06:57.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:57.989 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:57.989 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1223601 00:06:57.989 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.989 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:57.989 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1223601 00:06:57.989 16:46:18 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1223601 ']' 00:06:57.989 16:46:18 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.989 16:46:18 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.989 16:46:18 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
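The three event microbenchmarks above are plain binaries under test/event/ in the SPDK tree; each takes a core mask and a run time in seconds and prints its own counters (events per lcore, the oneshot/tick trace, events per second). Invoked directly with the same arguments the harness used:

    # 1-second event-framework microbenchmarks, as in the run above.
    ./test/event/event_perf/event_perf -m 0xF -t 1     # four cores: events handled per lcore
    ./test/event/reactor/reactor -t 1                  # one core: oneshot/tick trace
    ./test/event/reactor_perf/reactor_perf -t 1        # one core: events per second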
00:06:57.989 16:46:18 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.989 16:46:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.989 [2024-07-25 16:46:18.068761] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:06:57.989 [2024-07-25 16:46:18.068831] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223601 ] 00:06:57.989 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.989 [2024-07-25 16:46:18.126784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.989 [2024-07-25 16:46:18.193931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.989 [2024-07-25 16:46:18.194093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.989 [2024-07-25 16:46:18.194253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.989 [2024-07-25 16:46:18.194255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:58.931 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 [2024-07-25 16:46:18.860319] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:58.931 [2024-07-25 16:46:18.860333] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:58.931 [2024-07-25 16:46:18.860341] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:58.931 [2024-07-25 16:46:18.860345] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:58.931 [2024-07-25 16:46:18.860349] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 [2024-07-25 16:46:18.918521] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
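Because the scheduler test app is launched with --wait-for-rpc, subsystem initialization is deferred until RPCs arrive: the harness first selects the dynamic scheduler with framework_set_scheduler (the dpdk_governor error above only means the 0xF core mask does not cover whole SMT sibling sets, so the governor is skipped) and then releases initialization with framework_start_init. Issued by hand with scripts/rpc.py, which is what rpc_cmd wraps, the sequence is roughly:

    # The app was started with --wait-for-rpc, so init waits for these two calls.
    ./scripts/rpc.py framework_set_scheduler dynamic   # pick the scheduler before init
    ./scripts/rpc.py framework_start_init              # finish init and start scheduling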
00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.931 16:46:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 ************************************ 00:06:58.931 START TEST scheduler_create_thread 00:06:58.931 ************************************ 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 2 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 3 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 4 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 5 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 6 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 7 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.931 8 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:58.931 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.932 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.932 9 00:06:58.932 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.932 16:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:58.932 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.932 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.504 10 00:06:59.504 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.504 16:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:59.504 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.504 16:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.889 16:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.889 16:46:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:00.889 16:46:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:00.889 16:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.889 16:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.462 16:46:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.462 16:46:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:01.462 16:46:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.462 16:46:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.406 16:46:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.406 16:46:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:02.406 16:46:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:02.406 16:46:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.406 16:46:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.978 16:46:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.978 00:07:02.978 real 0m4.223s 00:07:02.978 user 0m0.026s 00:07:02.978 sys 0m0.006s 00:07:02.978 16:46:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.978 16:46:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.978 ************************************ 00:07:02.978 END TEST scheduler_create_thread 00:07:02.979 ************************************ 00:07:02.979 16:46:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:02.979 16:46:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1223601 00:07:02.979 16:46:23 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1223601 ']' 00:07:02.979 16:46:23 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1223601 00:07:02.979 16:46:23 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:02.979 16:46:23 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.979 16:46:23 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1223601 00:07:03.249 16:46:23 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:03.249 16:46:23 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:03.249 16:46:23 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1223601' 00:07:03.249 killing process with pid 1223601 00:07:03.249 16:46:23 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1223601 00:07:03.249 16:46:23 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1223601 00:07:03.249 [2024-07-25 16:46:23.459746] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
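The scheduler_create_thread subtest above uses RPCs that are not part of the stock rpc.py command set; they come from the test's scheduler_plugin module, loaded with --plugin, and create named threads with a cpumask and a reported busy percentage, retune one of them (thread 11 to 50), and delete another (thread 12). The same calls issued directly, assuming the scheduler_plugin module is importable (the harness arranges PYTHONPATH for this):

    # scheduler_plugin adds the scheduler_thread_* commands to rpc.py.
    rpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }

    # A thread pinned to core 0 reporting 100% load; the call prints the new thread id.
    tid=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)

    # An idle thread pinned to core 1.
    rpc scheduler_thread_create -n idle_pinned -m 0x2 -a 0

    # Drop the first thread's reported load to 50%, then remove it.
    rpc scheduler_thread_set_active "$tid" 50
    rpc scheduler_thread_delete "$tid"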
00:07:03.512 00:07:03.513 real 0m5.718s 00:07:03.513 user 0m12.738s 00:07:03.513 sys 0m0.382s 00:07:03.513 16:46:23 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.513 16:46:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:03.513 ************************************ 00:07:03.513 END TEST event_scheduler 00:07:03.513 ************************************ 00:07:03.513 16:46:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:03.513 16:46:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:03.513 16:46:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.513 16:46:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.513 16:46:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.513 ************************************ 00:07:03.513 START TEST app_repeat 00:07:03.513 ************************************ 00:07:03.513 16:46:23 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1224905 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1224905' 00:07:03.513 Process app_repeat pid: 1224905 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:03.513 spdk_app_start Round 0 00:07:03.513 16:46:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1224905 /var/tmp/spdk-nbd.sock 00:07:03.513 16:46:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1224905 ']' 00:07:03.513 16:46:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.513 16:46:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.513 16:46:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.513 16:46:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.513 16:46:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.513 [2024-07-25 16:46:23.752158] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:07:03.513 [2024-07-25 16:46:23.752234] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224905 ] 00:07:03.513 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.774 [2024-07-25 16:46:23.813928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.774 [2024-07-25 16:46:23.880009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.774 [2024-07-25 16:46:23.880012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.345 16:46:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.345 16:46:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:04.345 16:46:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.606 Malloc0 00:07:04.606 16:46:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.606 Malloc1 00:07:04.606 16:46:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.606 16:46:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.867 /dev/nbd0 00:07:04.867 16:46:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.867 16:46:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.867 16:46:25 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.867 1+0 records in 00:07:04.867 1+0 records out 00:07:04.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329586 s, 12.4 MB/s 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.867 16:46:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:04.867 16:46:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.867 16:46:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.867 16:46:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.129 /dev/nbd1 00:07:05.129 16:46:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.129 16:46:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.129 1+0 records in 00:07:05.129 1+0 records out 00:07:05.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268006 s, 15.3 MB/s 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.129 16:46:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:05.129 16:46:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.129 16:46:25 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.129 16:46:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.129 16:46:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.129 16:46:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.390 16:46:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.390 { 00:07:05.390 "nbd_device": "/dev/nbd0", 00:07:05.390 "bdev_name": "Malloc0" 00:07:05.391 }, 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd1", 00:07:05.391 "bdev_name": "Malloc1" 00:07:05.391 } 00:07:05.391 ]' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd0", 00:07:05.391 "bdev_name": "Malloc0" 00:07:05.391 }, 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd1", 00:07:05.391 "bdev_name": "Malloc1" 00:07:05.391 } 00:07:05.391 ]' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.391 /dev/nbd1' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.391 /dev/nbd1' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.391 256+0 records in 00:07:05.391 256+0 records out 00:07:05.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126511 s, 82.9 MB/s 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.391 256+0 records in 00:07:05.391 256+0 records out 00:07:05.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156688 s, 66.9 MB/s 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.391 256+0 records in 00:07:05.391 256+0 records out 00:07:05.391 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0351018 s, 29.9 MB/s 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.391 16:46:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.651 16:46:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.912 16:46:25 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.912 16:46:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.912 16:46:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.912 16:46:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.172 16:46:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.172 [2024-07-25 16:46:26.443648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.432 [2024-07-25 16:46:26.508107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.432 [2024-07-25 16:46:26.508109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.432 [2024-07-25 16:46:26.539502] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.432 [2024-07-25 16:46:26.539539] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.732 16:46:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:09.732 16:46:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:09.732 spdk_app_start Round 1 00:07:09.732 16:46:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1224905 /var/tmp/spdk-nbd.sock 00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1224905 ']' 00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
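Every app_repeat round traced here repeats the same bring-up: launch the app on its own RPC socket, create two malloc bdevs, and export them over NBD. A hedged sketch of that bring-up with the paths taken from the trace; it assumes root, hugepages already configured, and the nbd kernel module available:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk-nbd.sock
modprobe nbd
"$spdk/test/event/app_repeat/app_repeat" -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!
# the harness polls the socket with waitforlisten before issuing any RPC
"$spdk/scripts/rpc.py" -s "$sock" bdev_malloc_create 64 4096      # prints Malloc0
"$spdk/scripts/rpc.py" -s "$sock" bdev_malloc_create 64 4096      # prints Malloc1
"$spdk/scripts/rpc.py" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
"$spdk/scripts/rpc.py" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
"$spdk/scripts/rpc.py" -s "$sock" nbd_get_disks                   # two entries expected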
00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.732 16:46:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:09.732 16:46:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.732 Malloc0 00:07:09.732 16:46:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.732 Malloc1 00:07:09.732 16:46:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.732 16:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.733 16:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.733 16:46:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:09.733 /dev/nbd0 00:07:09.733 16:46:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.733 16:46:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.733 16:46:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:09.733 16:46:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:09.733 16:46:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.733 16:46:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.733 16:46:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:09.993 1+0 records in 00:07:09.993 1+0 records out 00:07:09.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239106 s, 17.1 MB/s 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:09.993 /dev/nbd1 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.993 1+0 records in 00:07:09.993 1+0 records out 00:07:09.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268842 s, 15.2 MB/s 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.993 16:46:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.993 16:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:10.254 { 00:07:10.254 "nbd_device": "/dev/nbd0", 00:07:10.254 "bdev_name": "Malloc0" 00:07:10.254 }, 00:07:10.254 { 00:07:10.254 "nbd_device": "/dev/nbd1", 00:07:10.254 "bdev_name": "Malloc1" 00:07:10.254 } 00:07:10.254 ]' 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.254 { 00:07:10.254 "nbd_device": "/dev/nbd0", 00:07:10.254 "bdev_name": "Malloc0" 00:07:10.254 }, 00:07:10.254 { 00:07:10.254 "nbd_device": "/dev/nbd1", 00:07:10.254 "bdev_name": "Malloc1" 00:07:10.254 } 00:07:10.254 ]' 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.254 /dev/nbd1' 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.254 /dev/nbd1' 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.254 16:46:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:10.254 256+0 records in 00:07:10.254 256+0 records out 00:07:10.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124587 s, 84.2 MB/s 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.255 256+0 records in 00:07:10.255 256+0 records out 00:07:10.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0398198 s, 26.3 MB/s 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.255 256+0 records in 00:07:10.255 256+0 records out 00:07:10.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171911 s, 61.0 MB/s 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.255 16:46:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.515 16:46:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.774 16:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.774 16:46:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.774 16:46:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.774 16:46:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.034 16:46:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.034 16:46:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:11.034 16:46:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.295 [2024-07-25 16:46:31.361187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.295 [2024-07-25 16:46:31.425082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.295 [2024-07-25 16:46:31.425084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.295 [2024-07-25 16:46:31.457366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.295 [2024-07-25 16:46:31.457400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:14.595 16:46:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.595 16:46:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:14.595 spdk_app_start Round 2 00:07:14.595 16:46:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1224905 /var/tmp/spdk-nbd.sock 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1224905 ']' 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
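The data-verify half of each round is the dd-and-cmp pattern repeated in the trace: fill a scratch file with 1 MiB of random data, push it through each NBD device with O_DIRECT, then compare it back. A condensed sketch using the scratch path from the trace:

tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"    # any mismatch fails the round
done
rm "$tmp"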
00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:14.595 16:46:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.595 Malloc0 00:07:14.595 16:46:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.595 Malloc1 00:07:14.595 16:46:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:14.595 /dev/nbd0 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:14.595 16:46:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.595 16:46:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:14.857 1+0 records in 00:07:14.857 1+0 records out 00:07:14.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290165 s, 14.1 MB/s 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.857 16:46:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:14.857 16:46:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.857 16:46:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.857 16:46:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:14.857 /dev/nbd1 00:07:14.857 16:46:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:14.857 16:46:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.857 1+0 records in 00:07:14.857 1+0 records out 00:07:14.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232046 s, 17.7 MB/s 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.857 16:46:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:14.857 16:46:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.857 16:46:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.857 16:46:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.857 16:46:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.857 16:46:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:15.118 { 00:07:15.118 "nbd_device": "/dev/nbd0", 00:07:15.118 "bdev_name": "Malloc0" 00:07:15.118 }, 00:07:15.118 { 00:07:15.118 "nbd_device": "/dev/nbd1", 00:07:15.118 "bdev_name": "Malloc1" 00:07:15.118 } 00:07:15.118 ]' 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.118 { 00:07:15.118 "nbd_device": "/dev/nbd0", 00:07:15.118 "bdev_name": "Malloc0" 00:07:15.118 }, 00:07:15.118 { 00:07:15.118 "nbd_device": "/dev/nbd1", 00:07:15.118 "bdev_name": "Malloc1" 00:07:15.118 } 00:07:15.118 ]' 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.118 /dev/nbd1' 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.118 /dev/nbd1' 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.118 16:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:15.119 256+0 records in 00:07:15.119 256+0 records out 00:07:15.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118643 s, 88.4 MB/s 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.119 256+0 records in 00:07:15.119 256+0 records out 00:07:15.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155944 s, 67.2 MB/s 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.119 256+0 records in 00:07:15.119 256+0 records out 00:07:15.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0384361 s, 27.3 MB/s 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.119 16:46:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.380 16:46:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.644 16:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.971 16:46:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.971 16:46:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.971 16:46:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:16.232 [2024-07-25 16:46:36.257683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.232 [2024-07-25 16:46:36.321260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.232 [2024-07-25 16:46:36.321273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.232 [2024-07-25 16:46:36.352639] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:16.232 [2024-07-25 16:46:36.352679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:19.536 16:46:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1224905 /var/tmp/spdk-nbd.sock 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1224905 ']' 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:19.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
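Before any of that I/O, the waitfornbd helper seen throughout the trace confirms each device node is usable: it polls /proc/partitions for the device name and then proves the device is readable with one 4 KiB O_DIRECT read. A sketch of that behaviour; the retry pacing and scratch-file location are assumptions, only the two checks come from the trace:

waitfornbd_sketch() {
    local nbd_name=$1
    local scratch=/tmp/nbdtest i
    # wait for the kernel to publish the partition entry
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # prove a direct read of the first block succeeds and returns data
    for ((i = 1; i <= 20; i++)); do
        if dd "if=/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct 2> /dev/null &&
            [ "$(stat -c %s "$scratch")" != 0 ]; then
            rm -f "$scratch"
            return 0
        fi
        sleep 0.1
    done
    rm -f "$scratch"
    return 1
}
waitfornbd_sketch nbd0 && echo nbd0 ready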
00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:19.536 16:46:39 event.app_repeat -- event/event.sh@39 -- # killprocess 1224905 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1224905 ']' 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1224905 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1224905 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1224905' 00:07:19.536 killing process with pid 1224905 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1224905 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1224905 00:07:19.536 spdk_app_start is called in Round 0. 00:07:19.536 Shutdown signal received, stop current app iteration 00:07:19.536 Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 reinitialization... 00:07:19.536 spdk_app_start is called in Round 1. 00:07:19.536 Shutdown signal received, stop current app iteration 00:07:19.536 Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 reinitialization... 00:07:19.536 spdk_app_start is called in Round 2. 00:07:19.536 Shutdown signal received, stop current app iteration 00:07:19.536 Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 reinitialization... 00:07:19.536 spdk_app_start is called in Round 3. 
00:07:19.536 Shutdown signal received, stop current app iteration 00:07:19.536 16:46:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:19.536 16:46:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:19.536 00:07:19.536 real 0m15.750s 00:07:19.536 user 0m33.941s 00:07:19.536 sys 0m2.134s 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.536 16:46:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.536 ************************************ 00:07:19.536 END TEST app_repeat 00:07:19.536 ************************************ 00:07:19.536 16:46:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:19.536 16:46:39 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:19.536 16:46:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.536 16:46:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.536 16:46:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.536 ************************************ 00:07:19.536 START TEST cpu_locks 00:07:19.536 ************************************ 00:07:19.536 16:46:39 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:19.536 * Looking for test storage... 00:07:19.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:19.536 16:46:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:19.536 16:46:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:19.536 16:46:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:19.536 16:46:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:19.536 16:46:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.536 16:46:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.536 16:46:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.536 ************************************ 00:07:19.536 START TEST default_locks 00:07:19.536 ************************************ 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1228166 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1228166 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1228166 ']' 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
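The default_locks case starting here boots a bare spdk_tgt pinned to core 0 (-m 0x1) and waits for its RPC socket; the locks_exist check that follows then uses lslocks to confirm the target is holding a file lock named spdk_cpu_lock_* (the files live under /var/tmp, as the check_remaining_locks step later in this log spells out). A minimal sketch of that check, assuming the target is already up and its pid is known:

# Sketch of the locks_exist helper used below (assumes a running spdk_tgt; pid taken from this log).
pid=1228166
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds its per-core lock file"
else
    echo "pid $pid holds no spdk_cpu_lock"
fi

The stray "lslocks: write error" lines that show up next to these checks are only lslocks complaining that grep -q closed the pipe after the first match; they do not indicate a test failure.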
00:07:19.536 16:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.536 16:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.536 [2024-07-25 16:46:39.720933] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:19.536 [2024-07-25 16:46:39.720971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228166 ] 00:07:19.536 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.536 [2024-07-25 16:46:39.772976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.797 [2024-07-25 16:46:39.838846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.368 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.368 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:20.368 16:46:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1228166 00:07:20.368 16:46:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1228166 00:07:20.368 16:46:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.629 lslocks: write error 00:07:20.629 16:46:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1228166 00:07:20.629 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1228166 ']' 00:07:20.629 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1228166 00:07:20.629 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.629 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.629 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1228166 00:07:20.891 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.891 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.891 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1228166' 00:07:20.891 killing process with pid 1228166 00:07:20.891 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1228166 00:07:20.892 16:46:40 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1228166 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1228166 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1228166 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 1228166 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1228166 ']' 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1228166) - No such process 00:07:20.892 ERROR: process (pid: 1228166) is no longer running 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.892 00:07:20.892 real 0m1.467s 00:07:20.892 user 0m1.542s 00:07:20.892 sys 0m0.498s 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.892 16:46:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.892 ************************************ 00:07:20.892 END TEST default_locks 00:07:20.892 ************************************ 00:07:21.153 16:46:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:21.153 16:46:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.153 16:46:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.153 16:46:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.154 ************************************ 00:07:21.154 START TEST default_locks_via_rpc 00:07:21.154 ************************************ 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1228529 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1228529 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1228529 ']' 00:07:21.154 16:46:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.154 16:46:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.154 [2024-07-25 16:46:41.270178] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:21.154 [2024-07-25 16:46:41.270242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228529 ] 00:07:21.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.154 [2024-07-25 16:46:41.329759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.154 [2024-07-25 16:46:41.398017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1228529 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1228529 00:07:22.097 16:46:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@73 -- # killprocess 1228529 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1228529 ']' 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1228529 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1228529 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1228529' 00:07:22.358 killing process with pid 1228529 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1228529 00:07:22.358 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1228529 00:07:22.620 00:07:22.620 real 0m1.500s 00:07:22.620 user 0m1.578s 00:07:22.620 sys 0m0.512s 00:07:22.620 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.620 16:46:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.620 ************************************ 00:07:22.620 END TEST default_locks_via_rpc 00:07:22.620 ************************************ 00:07:22.620 16:46:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:22.620 16:46:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.620 16:46:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.620 16:46:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.620 ************************************ 00:07:22.620 START TEST non_locking_app_on_locked_coremask 00:07:22.620 ************************************ 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1228895 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1228895 /var/tmp/spdk.sock 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1228895 ']' 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
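Before the non_locking_app_on_locked_coremask case gets going, note what default_locks_via_rpc demonstrated just above: the same per-core lock can be dropped and re-taken at runtime over the RPC interface, with framework_disable_cpumask_locks releasing the lock files and framework_enable_cpumask_locks re-acquiring them, and the no_locks / locks_exist helpers confirming the state on either side. A rough way to drive those two RPCs by hand is sketched below (assumes a target on /var/tmp/spdk.sock, its pid in $pid, and rpc.py from an SPDK checkout; the checks here use lslocks for brevity, where the script itself also globs /var/tmp/spdk_cpu_lock_*):

sock=/var/tmp/spdk.sock
./scripts/rpc.py -s "$sock" framework_disable_cpumask_locks    # release the per-core lock files
lslocks -p "$pid" | grep -c spdk_cpu_lock || true              # expect 0 while locks are disabled
./scripts/rpc.py -s "$sock" framework_enable_cpumask_locks     # take them again
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"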
00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.620 16:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.620 [2024-07-25 16:46:42.825985] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:22.620 [2024-07-25 16:46:42.826035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228895 ] 00:07:22.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.620 [2024-07-25 16:46:42.885447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.881 [2024-07-25 16:46:42.953181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1229146 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1229146 /var/tmp/spdk2.sock 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1229146 ']' 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.453 16:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:23.453 [2024-07-25 16:46:43.636212] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:23.453 [2024-07-25 16:46:43.636272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229146 ] 00:07:23.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.453 [2024-07-25 16:46:43.723199] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
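The second spdk_tgt launched just above shares core 0 with the first one (pid 1228895), but because it was started with --disable-cpumask-locks and its own RPC socket it never tries to take /var/tmp/spdk_cpu_lock_000, which is why it prints "CPU core locks deactivated" instead of failing to start. A rough, illustrative version of that two-instance pattern follows (binary path and sockets copied from this log; the real test polls the RPC sockets with waitforlisten rather than sleeping):

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &                                                  # takes the core-0 lock
pid1=$!
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
pid2=$!
sleep 2                                                               # crude; the test waits on the sockets instead
lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first instance owns the lock"
lslocks -p "$pid2" | grep -c spdk_cpu_lock || true                    # expect 0 for the second instance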
00:07:23.454 [2024-07-25 16:46:43.723231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.715 [2024-07-25 16:46:43.852514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.288 16:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.288 16:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:24.288 16:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1228895 00:07:24.288 16:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1228895 00:07:24.288 16:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.860 lslocks: write error 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1228895 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1228895 ']' 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1228895 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1228895 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1228895' 00:07:24.860 killing process with pid 1228895 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1228895 00:07:24.860 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1228895 00:07:25.432 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1229146 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1229146 ']' 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1229146 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1229146 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1229146' 00:07:25.433 
killing process with pid 1229146 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1229146 00:07:25.433 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1229146 00:07:25.693 00:07:25.693 real 0m2.971s 00:07:25.693 user 0m3.239s 00:07:25.693 sys 0m0.889s 00:07:25.693 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.693 16:46:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.693 ************************************ 00:07:25.693 END TEST non_locking_app_on_locked_coremask 00:07:25.693 ************************************ 00:07:25.693 16:46:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:25.693 16:46:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.693 16:46:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.693 16:46:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.693 ************************************ 00:07:25.693 START TEST locking_app_on_unlocked_coremask 00:07:25.693 ************************************ 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1229601 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1229601 /var/tmp/spdk.sock 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1229601 ']' 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.693 16:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.693 [2024-07-25 16:46:45.870597] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:25.693 [2024-07-25 16:46:45.870654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229601 ] 00:07:25.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.693 [2024-07-25 16:46:45.929788] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:25.693 [2024-07-25 16:46:45.929815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.953 [2024-07-25 16:46:45.997003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1229690 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1229690 /var/tmp/spdk2.sock 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1229690 ']' 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.525 16:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.525 [2024-07-25 16:46:46.663068] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:07:26.525 [2024-07-25 16:46:46.663119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229690 ] 00:07:26.525 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.525 [2024-07-25 16:46:46.749714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.786 [2024-07-25 16:46:46.879189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.359 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.359 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:27.359 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1229690 00:07:27.359 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1229690 00:07:27.359 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.620 lslocks: write error 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1229601 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1229601 ']' 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1229601 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1229601 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1229601' 00:07:27.620 killing process with pid 1229601 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1229601 00:07:27.620 16:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1229601 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1229690 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1229690 ']' 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1229690 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1229690 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1229690' 00:07:28.192 killing process with pid 1229690 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1229690 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1229690 00:07:28.192 00:07:28.192 real 0m2.606s 00:07:28.192 user 0m2.842s 00:07:28.192 sys 0m0.749s 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.192 16:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.192 ************************************ 00:07:28.192 END TEST locking_app_on_unlocked_coremask 00:07:28.192 ************************************ 00:07:28.192 16:46:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:28.192 16:46:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.192 16:46:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.192 16:46:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.454 ************************************ 00:07:28.454 START TEST locking_app_on_locked_coremask 00:07:28.454 ************************************ 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1230154 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1230154 /var/tmp/spdk.sock 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1230154 ']' 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.454 16:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.454 [2024-07-25 16:46:48.547739] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:07:28.454 [2024-07-25 16:46:48.547795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230154 ] 00:07:28.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.454 [2024-07-25 16:46:48.610222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.454 [2024-07-25 16:46:48.684434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1230317 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1230317 /var/tmp/spdk2.sock 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1230317 /var/tmp/spdk2.sock 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1230317 /var/tmp/spdk2.sock 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1230317 ']' 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.398 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.398 [2024-07-25 16:46:49.359205] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:07:29.398 [2024-07-25 16:46:49.359257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230317 ] 00:07:29.398 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.398 [2024-07-25 16:46:49.448106] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1230154 has claimed it. 00:07:29.398 [2024-07-25 16:46:49.448144] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:29.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1230317) - No such process 00:07:29.971 ERROR: process (pid: 1230317) is no longer running 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1230154 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.971 16:46:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1230154 00:07:30.232 lslocks: write error 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1230154 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1230154 ']' 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1230154 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1230154 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1230154' 00:07:30.232 killing process with pid 1230154 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1230154 00:07:30.232 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1230154 00:07:30.501 00:07:30.501 real 0m2.150s 00:07:30.501 user 0m2.359s 00:07:30.501 sys 0m0.590s 00:07:30.501 16:46:50 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.501 16:46:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.501 ************************************ 00:07:30.501 END TEST locking_app_on_locked_coremask 00:07:30.501 ************************************ 00:07:30.501 16:46:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:30.501 16:46:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.501 16:46:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.501 16:46:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.501 ************************************ 00:07:30.501 START TEST locking_overlapped_coremask 00:07:30.501 ************************************ 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1230683 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1230683 /var/tmp/spdk.sock 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1230683 ']' 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.501 16:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.768 [2024-07-25 16:46:50.776315] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:07:30.768 [2024-07-25 16:46:50.776373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230683 ] 00:07:30.768 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.768 [2024-07-25 16:46:50.836401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.768 [2024-07-25 16:46:50.902224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.768 [2024-07-25 16:46:50.902319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.768 [2024-07-25 16:46:50.902484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1230709 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1230709 /var/tmp/spdk2.sock 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1230709 /var/tmp/spdk2.sock 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1230709 /var/tmp/spdk2.sock 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1230709 ']' 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.341 16:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.341 [2024-07-25 16:46:51.600012] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:07:31.341 [2024-07-25 16:46:51.600065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230709 ] 00:07:31.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.603 [2024-07-25 16:46:51.670939] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1230683 has claimed it. 00:07:31.603 [2024-07-25 16:46:51.670978] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:32.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1230709) - No such process 00:07:32.175 ERROR: process (pid: 1230709) is no longer running 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1230683 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1230683 ']' 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1230683 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1230683 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1230683' 00:07:32.175 killing process with pid 1230683 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1230683 00:07:32.175 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1230683 00:07:32.437 00:07:32.437 real 0m1.751s 00:07:32.437 user 0m4.980s 00:07:32.437 sys 0m0.352s 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.437 ************************************ 00:07:32.437 END TEST locking_overlapped_coremask 00:07:32.437 ************************************ 00:07:32.437 16:46:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:32.437 16:46:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.437 16:46:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.437 16:46:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.437 ************************************ 00:07:32.437 START TEST locking_overlapped_coremask_via_rpc 00:07:32.437 ************************************ 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1231051 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1231051 /var/tmp/spdk.sock 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1231051 ']' 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.437 16:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.437 [2024-07-25 16:46:52.601617] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:32.437 [2024-07-25 16:46:52.601662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231051 ] 00:07:32.437 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.437 [2024-07-25 16:46:52.660466] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:32.437 [2024-07-25 16:46:52.660496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.698 [2024-07-25 16:46:52.725494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.698 [2024-07-25 16:46:52.725611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.698 [2024-07-25 16:46:52.725613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1231121 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1231121 /var/tmp/spdk2.sock 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1231121 ']' 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.270 16:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.270 [2024-07-25 16:46:53.423379] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:33.270 [2024-07-25 16:46:53.423433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231121 ] 00:07:33.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.270 [2024-07-25 16:46:53.493935] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:33.270 [2024-07-25 16:46:53.493962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.531 [2024-07-25 16:46:53.604252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.531 [2024-07-25 16:46:53.604363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.531 [2024-07-25 16:46:53.604365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.104 [2024-07-25 16:46:54.196267] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1231051 has claimed it. 
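[annotation] The ERROR just above about core 2 is the expected collision: the first spdk_tgt in this test was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), both with --disable-cpumask-locks, so once the first process claims its per-core lock files the second can no longer claim core 2. A quick sketch (not part of the test scripts; the printf line is illustrative only) of how the contested core falls out of the two masks:

    # Core masks used by the two spdk_tgt instances in this run:
    #   first  target: -m 0x7   -> cores 0,1,2   (pid 1231051)
    #   second target: -m 0x1c  -> cores 2,3,4   (pid 1231121)
    # The bitwise AND of the masks is the contested core:
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2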
00:07:34.104 request: 00:07:34.104 { 00:07:34.104 "method": "framework_enable_cpumask_locks", 00:07:34.104 "req_id": 1 00:07:34.104 } 00:07:34.104 Got JSON-RPC error response 00:07:34.104 response: 00:07:34.104 { 00:07:34.104 "code": -32603, 00:07:34.104 "message": "Failed to claim CPU core: 2" 00:07:34.104 } 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1231051 /var/tmp/spdk.sock 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1231051 ']' 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.104 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1231121 /var/tmp/spdk2.sock 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1231121 ']' 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
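[annotation] The JSON-RPC error above (code -32603, "Failed to claim CPU core: 2") is what rpc_cmd relayed back from the second target's socket. As a hedged sketch, the same call could be reproduced by hand with scripts/rpc.py against the socket used in this run; while the other process still holds the core-2 lock it should fail the same way:

    # Ask the second target (listening on /var/tmp/spdk2.sock) to create its
    # per-core lock files; fails with -32603 while core 2 is already claimed.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks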
00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:34.366 00:07:34.366 real 0m2.002s 00:07:34.366 user 0m0.775s 00:07:34.366 sys 0m0.153s 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.366 16:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.366 ************************************ 00:07:34.366 END TEST locking_overlapped_coremask_via_rpc 00:07:34.366 ************************************ 00:07:34.366 16:46:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:34.366 16:46:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1231051 ]] 00:07:34.366 16:46:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1231051 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1231051 ']' 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1231051 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1231051 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1231051' 00:07:34.366 killing process with pid 1231051 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1231051 00:07:34.366 16:46:54 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1231051 00:07:34.657 16:46:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1231121 ]] 00:07:34.657 16:46:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1231121 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1231121 ']' 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1231121 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1231121 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1231121' 00:07:34.657 killing process with pid 1231121 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1231121 00:07:34.657 16:46:54 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1231121 00:07:34.918 16:46:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.918 16:46:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:34.918 16:46:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1231051 ]] 00:07:34.918 16:46:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1231051 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1231051 ']' 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1231051 00:07:34.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1231051) - No such process 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1231051 is not found' 00:07:34.918 Process with pid 1231051 is not found 00:07:34.918 16:46:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1231121 ]] 00:07:34.918 16:46:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1231121 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1231121 ']' 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1231121 00:07:34.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1231121) - No such process 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1231121 is not found' 00:07:34.918 Process with pid 1231121 is not found 00:07:34.918 16:46:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.918 00:07:34.918 real 0m15.571s 00:07:34.918 user 0m26.910s 00:07:34.918 sys 0m4.590s 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.918 16:46:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.918 ************************************ 00:07:34.918 END TEST cpu_locks 00:07:34.918 ************************************ 00:07:34.918 00:07:34.918 real 0m41.224s 00:07:34.918 user 1m20.179s 00:07:34.918 sys 0m7.717s 00:07:34.918 16:46:55 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.918 16:46:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.918 ************************************ 00:07:34.918 END TEST event 00:07:34.918 ************************************ 00:07:34.918 16:46:55 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:34.918 16:46:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.918 16:46:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.918 16:46:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.181 ************************************ 00:07:35.181 START TEST thread 00:07:35.181 ************************************ 00:07:35.181 16:46:55 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:35.181 * Looking for test storage... 00:07:35.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:35.181 16:46:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:35.181 16:46:55 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:35.181 16:46:55 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.181 16:46:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.181 ************************************ 00:07:35.181 START TEST thread_poller_perf 00:07:35.181 ************************************ 00:07:35.181 16:46:55 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:35.181 [2024-07-25 16:46:55.392478] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:35.181 [2024-07-25 16:46:55.392579] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231700 ] 00:07:35.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.442 [2024-07-25 16:46:55.460540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.442 [2024-07-25 16:46:55.536136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.442 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:36.386 ====================================== 00:07:36.386 busy:2406582354 (cyc) 00:07:36.386 total_run_count: 287000 00:07:36.386 tsc_hz: 2400000000 (cyc) 00:07:36.386 ====================================== 00:07:36.386 poller_cost: 8385 (cyc), 3493 (nsec) 00:07:36.386 00:07:36.386 real 0m1.226s 00:07:36.386 user 0m1.144s 00:07:36.386 sys 0m0.078s 00:07:36.386 16:46:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.386 16:46:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.386 ************************************ 00:07:36.386 END TEST thread_poller_perf 00:07:36.386 ************************************ 00:07:36.386 16:46:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:36.386 16:46:56 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:36.386 16:46:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.386 16:46:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.648 ************************************ 00:07:36.648 START TEST thread_poller_perf 00:07:36.648 ************************************ 00:07:36.648 16:46:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:36.648 [2024-07-25 16:46:56.691387] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
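[annotation] The poller_perf summary above ("1000 pollers for 1 seconds with 1 microseconds period" corresponds to -b 1000 -l 1 -t 1) reduces to simple arithmetic consistent with the reported numbers: poller_cost in cycles is the busy TSC cycle count divided by total_run_count, and the nanosecond figure follows from the 2.4 GHz TSC. A sketch reproducing the first run's values (variable names are illustrative shorthand):

    busy=2406582354     # busy: TSC cycles spent polling during the 1 s run
    count=287000        # total_run_count
    tsc_khz=2400000     # tsc_hz of 2400000000 cyc/s, expressed in kHz
    echo "poller_cost: $(( busy / count )) cyc"                       # -> 8385 cyc
    echo "poller_cost: $(( busy / count * 1000000 / tsc_khz )) nsec"  # -> 3493 nsec

The same arithmetic matches the zero-period run that follows (630 cyc, 262 nsec).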
00:07:36.648 [2024-07-25 16:46:56.691489] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231871 ] 00:07:36.648 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.648 [2024-07-25 16:46:56.754216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.648 [2024-07-25 16:46:56.819714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.648 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:38.034 ====================================== 00:07:38.034 busy:2401911336 (cyc) 00:07:38.034 total_run_count: 3809000 00:07:38.034 tsc_hz: 2400000000 (cyc) 00:07:38.034 ====================================== 00:07:38.034 poller_cost: 630 (cyc), 262 (nsec) 00:07:38.034 00:07:38.034 real 0m1.203s 00:07:38.034 user 0m1.130s 00:07:38.034 sys 0m0.069s 00:07:38.034 16:46:57 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.034 16:46:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:38.034 ************************************ 00:07:38.034 END TEST thread_poller_perf 00:07:38.034 ************************************ 00:07:38.034 16:46:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:38.034 00:07:38.034 real 0m2.685s 00:07:38.034 user 0m2.368s 00:07:38.034 sys 0m0.326s 00:07:38.034 16:46:57 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.034 16:46:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.034 ************************************ 00:07:38.034 END TEST thread 00:07:38.034 ************************************ 00:07:38.034 16:46:57 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:38.034 16:46:57 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.034 16:46:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.034 16:46:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.034 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.034 ************************************ 00:07:38.034 START TEST app_cmdline 00:07:38.034 ************************************ 00:07:38.034 16:46:57 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.034 * Looking for test storage... 00:07:38.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:38.034 16:46:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:38.034 16:46:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1232254 00:07:38.034 16:46:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1232254 00:07:38.034 16:46:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:38.034 16:46:58 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1232254 ']' 00:07:38.034 16:46:58 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.034 16:46:58 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.034 16:46:58 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:38.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.034 16:46:58 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.034 16:46:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.034 [2024-07-25 16:46:58.147656] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:38.034 [2024-07-25 16:46:58.147723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232254 ] 00:07:38.034 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.034 [2024-07-25 16:46:58.213318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.034 [2024-07-25 16:46:58.287195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.974 16:46:58 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.974 16:46:58 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:38.974 16:46:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:38.974 { 00:07:38.974 "version": "SPDK v24.09-pre git sha1 7b27bb4a4", 00:07:38.974 "fields": { 00:07:38.975 "major": 24, 00:07:38.975 "minor": 9, 00:07:38.975 "patch": 0, 00:07:38.975 "suffix": "-pre", 00:07:38.975 "commit": "7b27bb4a4" 00:07:38.975 } 00:07:38.975 } 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.975 16:46:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.975 16:46:59 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.235 request: 00:07:39.235 { 00:07:39.235 "method": "env_dpdk_get_mem_stats", 00:07:39.235 "req_id": 1 00:07:39.235 } 00:07:39.235 Got JSON-RPC error response 00:07:39.235 response: 00:07:39.235 { 00:07:39.235 "code": -32601, 00:07:39.235 "message": "Method not found" 00:07:39.235 } 00:07:39.235 16:46:59 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:39.235 16:46:59 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.235 16:46:59 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:39.235 16:46:59 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.236 16:46:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1232254 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1232254 ']' 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1232254 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1232254 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1232254' 00:07:39.236 killing process with pid 1232254 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@969 -- # kill 1232254 00:07:39.236 16:46:59 app_cmdline -- common/autotest_common.sh@974 -- # wait 1232254 00:07:39.497 00:07:39.497 real 0m1.590s 00:07:39.497 user 0m1.913s 00:07:39.497 sys 0m0.413s 00:07:39.497 16:46:59 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.497 16:46:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.497 ************************************ 00:07:39.497 END TEST app_cmdline 00:07:39.497 ************************************ 00:07:39.497 16:46:59 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.497 16:46:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.497 16:46:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.497 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.497 ************************************ 00:07:39.497 START TEST version 00:07:39.497 ************************************ 00:07:39.497 16:46:59 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.497 * Looking for test storage... 
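[annotation] The "Method not found" (-32601) response above is the point of the cmdline test: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable and env_dpdk_get_mem_stats is rejected. A hedged sketch of the same check done by hand against the socket used here (the RPC variable is shorthand):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock spdk_get_version        # allowed  -> returns the version JSON
    $RPC -s /var/tmp/spdk.sock rpc_get_methods         # allowed  -> lists the permitted methods
    $RPC -s /var/tmp/spdk.sock env_dpdk_get_mem_stats  # filtered -> -32601 "Method not found"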
00:07:39.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:39.497 16:46:59 version -- app/version.sh@17 -- # get_header_version major 00:07:39.497 16:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.497 16:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:39.497 16:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.497 16:46:59 version -- app/version.sh@17 -- # major=24 00:07:39.497 16:46:59 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.497 16:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.497 16:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.497 16:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:39.497 16:46:59 version -- app/version.sh@18 -- # minor=9 00:07:39.758 16:46:59 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.758 16:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.758 16:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:39.758 16:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.758 16:46:59 version -- app/version.sh@19 -- # patch=0 00:07:39.758 16:46:59 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.758 16:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.758 16:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:39.758 16:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.758 16:46:59 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.758 16:46:59 version -- app/version.sh@22 -- # version=24.9 00:07:39.758 16:46:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.758 16:46:59 version -- app/version.sh@28 -- # version=24.9rc0 00:07:39.758 16:46:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:39.758 16:46:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.758 16:46:59 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:39.758 16:46:59 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:39.758 00:07:39.758 real 0m0.179s 00:07:39.758 user 0m0.088s 00:07:39.758 sys 0m0.133s 00:07:39.758 16:46:59 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.758 16:46:59 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.758 ************************************ 00:07:39.758 END TEST version 00:07:39.758 ************************************ 00:07:39.758 16:46:59 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:39.758 16:46:59 -- spdk/autotest.sh@202 -- # uname -s 00:07:39.758 16:46:59 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:39.758 16:46:59 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:39.758 16:46:59 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:39.758 16:46:59 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
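[annotation] The version test above assembles the expected string from include/spdk/version.h and compares it with what the Python package reports (24.9rc0 in this run). The grep/cut/tr pipeline it traces can be condensed as below; a sketch only, using the header path from the log, with shorthand variable names, and the -pre -> rc0 substitution standing in for what app/version.sh does when the suffix is -pre and patch is 0:

    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$H" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$H" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix/-pre/rc0}"   # -> 24.9rc0, matching spdk.__version__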
00:07:39.758 16:46:59 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:39.758 16:46:59 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:39.758 16:46:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.759 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.759 16:46:59 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:39.759 16:46:59 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:39.759 16:46:59 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:39.759 16:46:59 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:39.759 16:46:59 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:39.759 16:46:59 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:39.759 16:46:59 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.759 16:46:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:39.759 16:46:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.759 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.759 ************************************ 00:07:39.759 START TEST nvmf_tcp 00:07:39.759 ************************************ 00:07:39.759 16:46:59 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:40.020 * Looking for test storage... 00:07:40.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:40.020 16:47:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:40.020 16:47:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:40.020 16:47:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:40.020 16:47:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.021 16:47:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.021 16:47:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.021 ************************************ 00:07:40.021 START TEST nvmf_target_core 00:07:40.021 ************************************ 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:40.021 * Looking for test storage... 00:07:40.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.021 ************************************ 00:07:40.021 START TEST nvmf_abort 00:07:40.021 ************************************ 00:07:40.021 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:40.282 * Looking for test storage... 
00:07:40.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.282 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.283 16:47:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:48.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:48.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.429 16:47:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:48.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:48.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.429 
16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:07:48.429 00:07:48.429 --- 10.0.0.2 ping statistics --- 00:07:48.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.429 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.492 ms 00:07:48.429 00:07:48.429 --- 10.0.0.1 ping statistics --- 00:07:48.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.429 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:48.429 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1236695 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1236695 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1236695 ']' 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.430 16:47:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 [2024-07-25 16:47:07.768030] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:07:48.430 [2024-07-25 16:47:07.768096] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.430 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.430 [2024-07-25 16:47:07.857717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.430 [2024-07-25 16:47:07.953595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.430 [2024-07-25 16:47:07.953657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.430 [2024-07-25 16:47:07.953665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.430 [2024-07-25 16:47:07.953672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.430 [2024-07-25 16:47:07.953678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
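For reference, the network plumbing that nvmf_tcp_init performs in the trace above can be reproduced by hand. This is a condensed sketch of the exact commands visible in the log; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run's E810 ports and would differ on other hardware.

    # Move one port of the NIC pair into a private namespace for the target,
    # leave the other port in the root namespace for the initiator.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address the initiator side (root ns) and the target side (test ns).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links up and open the NVMe/TCP port on the initiator side.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the wiring verified, the test then launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and waits for its RPC socket, which is what the nvmfpid/waitforlisten lines around here are doing.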
00:07:48.430 [2024-07-25 16:47:07.953817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.430 [2024-07-25 16:47:07.953983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.430 [2024-07-25 16:47:07.953984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 [2024-07-25 16:47:08.599620] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 Malloc0 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 Delay0 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 [2024-07-25 16:47:08.677198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.430 16:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:48.691 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.691 [2024-07-25 16:47:08.798442] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:51.238 Initializing NVMe Controllers 00:07:51.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:51.238 controller IO queue size 128 less than required 00:07:51.238 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:51.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:51.238 Initialization complete. Launching workers. 
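Stripped of the xtrace framing, the nvmf_abort setup traced above reduces to a handful of RPCs against the in-namespace target followed by one run of the abort example. A sketch of that sequence, assuming $rpc points at the repo's scripts/rpc.py for the running target (the test issues the same calls through its rpc_cmd wrapper); all flags are copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort

    # Create the TCP transport with the options the test passes via NVMF_TRANSPORT_OPTS.
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256

    # Back the namespace with a deliberately slow bdev: Malloc0 wrapped in Delay0.
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Export it over NVMe/TCP on the target-side address.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Queue I/O against the high-latency namespace and abort it.
    $abort_bin -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
               -c 0x1 -t 1 -l warning -q 128

The completion counters that follow ("I/O completed ... abort submitted ... failed to submit") are the example's own summary of how many queued commands it managed to abort against the Delay0-backed namespace.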
00:07:51.238 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 27406 00:07:51.238 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27468, failed to submit 62 00:07:51.238 success 27410, unsuccess 58, failed 0 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.238 16:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.238 rmmod nvme_tcp 00:07:51.238 rmmod nvme_fabrics 00:07:51.238 rmmod nvme_keyring 00:07:51.238 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.238 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:51.238 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:51.238 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1236695 ']' 00:07:51.238 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1236695 00:07:51.238 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1236695 ']' 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1236695 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1236695 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1236695' 00:07:51.239 killing process with pid 1236695 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1236695 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1236695 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.239 16:47:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.154 00:07:53.154 real 0m13.035s 00:07:53.154 user 0m13.648s 00:07:53.154 sys 0m6.493s 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.154 ************************************ 00:07:53.154 END TEST nvmf_abort 00:07:53.154 ************************************ 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.154 ************************************ 00:07:53.154 START TEST nvmf_ns_hotplug_stress 00:07:53.154 ************************************ 00:07:53.154 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.415 * Looking for test storage... 
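The nvmf_abort teardown traced just above is the mirror image of the setup: unload the host-side NVMe/TCP modules, kill the target, drop the namespace, and flush the initiator address. Roughly as follows; the namespace removal itself runs with its output suppressed (xtrace_disable_per_cmd _remove_spdk_ns), so the ip netns delete line is inferred rather than copied from the trace:

    # Unload the initiator-side kernel modules pulled in by "modprobe nvme-tcp".
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf_tgt that this shell started inside the namespace.
    kill "$nvmfpid" && wait "$nvmfpid"

    # Tear down the network plumbing (namespace removal inferred, see above).
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

The nvmf_ns_hotplug_stress test that starts here then rebuilds the same environment from scratch, which is why the PCI scan and netns setup below repeat the sequence already seen for nvmf_abort.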
00:07:53.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.415 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.415 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:53.415 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.415 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.415 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.415 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.416 16:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
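The device scan that begins here (and that already ran once above for nvmf_abort) resolves each whitelisted PCI function to its kernel netdev through sysfs rather than through lspci. A simplified sketch of that lookup, using the two E810 functions found in this run; the operstate read is inferred from the "[[ up == up ]]" comparisons in the trace:

    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # Every netdev bound to this PCI function shows up under its sysfs node.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue
        for net_dev in "${pci_net_devs[@]}"; do
            # Keep only interfaces that are operationally up.
            [[ $(< "$net_dev/operstate") == up ]] || continue
            echo "Found net devices under $pci: ${net_dev##*/}"
            net_devs+=("${net_dev##*/}")
        done
    done

In this run the scan yields cvl_0_0 and cvl_0_1 again, and the first becomes the target interface while the second stays with the initiator, exactly as in the abort test.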
00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.558 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:01.559 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.559 16:47:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:01.559 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:01.559 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:01.559 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:01.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:08:01.559 00:08:01.559 --- 10.0.0.2 ping statistics --- 00:08:01.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.559 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:08:01.559 00:08:01.559 --- 10.0.0.1 ping statistics --- 00:08:01.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.559 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1241410 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1241410 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1241410 ']' 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
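Everything that follows in this section is the stress loop itself: a 30-second spdk_nvme_perf run against cnode1 while namespace 1 is repeatedly removed, re-added, and the NULL1 bdev resized underneath it. A condensed sketch of the @44–@50 steps traced below (rpc paths and the exact loop structure are simplified; the real script keeps iterating until the perf process exits):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    # Namespaces under test: the Delay0 bdev plus a resizable null bdev.
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Keep random reads in flight for 30 seconds while namespaces churn.
    $perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        $rpc bdev_null_resize NULL1 $((++null_size))
    done
    wait "$PERF_PID"

The "Read completed with error (sct=0, sc=11)" messages interleaved with the loop below are the expected side effect of yanking a namespace while reads are outstanding; the test only fails if the perf process dies or an RPC in the loop errors out.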
00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.559 16:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.560 [2024-07-25 16:47:20.712013] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:08:01.560 [2024-07-25 16:47:20.712061] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.560 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.560 [2024-07-25 16:47:20.795130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.560 [2024-07-25 16:47:20.872881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.560 [2024-07-25 16:47:20.872938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.560 [2024-07-25 16:47:20.872945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.560 [2024-07-25 16:47:20.872952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.560 [2024-07-25 16:47:20.872958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.560 [2024-07-25 16:47:20.873131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.560 [2024-07-25 16:47:20.873275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.560 [2024-07-25 16:47:20.873468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.560 [2024-07-25 16:47:21.666072] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.560 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:01.820 16:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.820 
[2024-07-25 16:47:22.009167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.820 16:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.081 16:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:02.081 Malloc0 00:08:02.340 16:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:02.340 Delay0 00:08:02.340 16:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.601 16:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:02.601 NULL1 00:08:02.601 16:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:02.864 16:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1242022 00:08:02.864 16:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:02.864 16:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:02.864 16:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.864 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.331 Read completed with error (sct=0, sc=11) 00:08:04.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.331 16:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.331 16:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:04.331 16:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:04.331 true 00:08:04.331 16:47:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:04.331 16:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.271 16:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.271 16:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:05.271 16:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:05.531 true 00:08:05.531 16:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:05.531 16:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.810 16:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.810 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:05.810 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:06.072 true 00:08:06.072 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:06.072 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.332 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.332 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:06.332 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:06.593 true 00:08:06.593 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:06.593 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.853 16:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.853 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:06.853 16:47:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:07.128 true 00:08:07.128 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:07.128 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.128 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.388 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:07.388 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:07.650 true 00:08:07.650 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:07.650 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.650 16:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.911 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:07.911 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:08.172 true 00:08:08.172 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:08.172 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.172 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.432 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:08.432 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:08.432 true 00:08:08.693 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:08.693 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.693 16:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.954 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:08.954 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:08.954 true 00:08:08.954 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:08.954 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.215 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.476 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:09.476 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:09.476 true 00:08:09.476 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:09.476 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.738 16:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.000 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:10.000 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:10.000 true 00:08:10.000 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:10.000 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.261 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.521 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:10.521 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:10.521 true 00:08:10.521 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:10.521 16:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.781 16:47:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.043 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:11.043 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:11.043 true 00:08:11.043 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:11.043 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.304 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.565 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:11.565 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:11.565 true 00:08:11.565 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:11.565 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.826 16:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.826 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:11.826 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:12.087 true 00:08:12.087 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:12.087 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.348 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.348 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:12.348 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:12.610 true 00:08:12.610 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:12.610 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.871 16:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.871 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:12.871 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:13.133 true 00:08:13.133 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:13.133 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.393 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.393 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:13.393 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:13.653 true 00:08:13.653 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:13.653 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.914 16:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.914 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:13.914 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:14.175 true 00:08:14.175 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:14.175 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.175 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.437 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:14.437 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:14.697 true 00:08:14.697 16:47:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:14.697 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.697 16:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.958 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:14.958 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:15.219 true 00:08:15.219 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:15.219 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.219 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.480 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:15.480 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:15.742 true 00:08:15.742 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:15.742 16:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.686 16:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.686 16:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:16.686 16:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:16.686 true 00:08:16.947 16:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:16.947 16:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.947 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.209 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:17.209 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:17.209 true 00:08:17.209 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:17.209 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.470 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.731 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:17.731 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:17.731 true 00:08:17.731 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:17.731 16:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.993 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.254 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:18.254 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:18.254 true 00:08:18.254 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:18.254 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.515 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.775 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:18.775 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:18.775 true 00:08:18.775 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:18.775 16:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.052 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.052 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:19.052 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:19.313 true 00:08:19.313 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:19.313 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.575 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.575 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:19.575 16:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:19.887 true 00:08:19.887 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:19.887 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.165 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.165 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:20.165 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:20.426 true 00:08:20.426 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:20.426 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.426 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.687 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:20.687 16:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:20.949 true 00:08:20.949 16:47:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:20.949 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.949 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.211 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:21.211 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:21.472 true 00:08:21.472 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:21.472 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.472 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.734 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:21.734 16:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:21.734 true 00:08:21.995 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:21.995 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.995 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.256 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:22.256 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:22.256 true 00:08:22.256 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:22.256 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.516 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.778 16:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:22.778 16:47:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:22.778 true 00:08:22.778 16:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:22.778 16:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.721 16:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.982 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:23.982 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:24.242 true 00:08:24.242 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:24.242 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.242 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.504 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:24.504 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:24.765 true 00:08:24.765 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:24.765 16:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.765 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.027 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:25.027 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:25.288 true 00:08:25.288 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:25.288 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.288 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.549 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:25.549 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:25.810 true 00:08:25.810 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:25.810 16:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.810 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.071 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:26.071 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:26.071 true 00:08:26.333 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:26.333 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.333 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.593 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:26.594 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:26.594 true 00:08:26.854 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:26.854 16:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.854 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.116 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:27.116 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:27.116 true 00:08:27.116 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 
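The records above all repeat the same hotplug cycle from target/ns_hotplug_stress.sh (markers @44-@50): check with kill -0 that the background I/O generator (PID 1242022 in this run) is still alive, hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1, hot-add the Delay0 bdev back as a namespace, then bump the NULL1 resize target by one each pass (1005, 1006, ... in the trace). A minimal bash sketch of that cycle, reconstructed only from this trace; the while-loop form and the rpc/perf_pid variable names are assumptions, not the script's actual text:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=1242022                # PID of the background I/O generator seen in the trace
    null_size=1004                  # assumed starting point; the visible trace continues at 1005
    while kill -0 "$perf_pid"; do                                        # sh@44: loop until the I/O generator exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back from the Delay0 bdev
        null_size=$((null_size + 1))                                     # sh@49: 1005, 1006, ... as seen above
        "$rpc" bdev_null_resize NULL1 "$null_size"                       # sh@50: resize NULL1 while I/O is in flight
    done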
00:08:27.116 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.377 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.639 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:27.639 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:27.639 true 00:08:27.639 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:27.639 16:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.901 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.162 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:28.162 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:28.162 true 00:08:28.162 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:28.162 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.424 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.685 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:28.685 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:28.685 true 00:08:28.685 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:28.685 16:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.946 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.207 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:29.207 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:29.207 true 00:08:29.207 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:29.207 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.469 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.731 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:29.731 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:29.731 true 00:08:29.731 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:29.731 16:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.992 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.992 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:29.992 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:30.253 true 00:08:30.253 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:30.253 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.514 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.514 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:30.514 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:30.775 true 00:08:30.775 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:30.775 16:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.037 16:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.037 16:47:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:31.037 16:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:31.298 true 00:08:31.298 16:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:31.298 16:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.241 16:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.502 16:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:32.502 16:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:32.502 true 00:08:32.502 16:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:32.502 16:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.763 16:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.024 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:33.024 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:33.024 true 00:08:33.024 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:33.024 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.288 Initializing NVMe Controllers 00:08:33.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.289 Controller IO queue size 128, less than required. 00:08:33.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.289 Controller IO queue size 128, less than required. 00:08:33.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:33.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:33.289 Initialization complete. Launching workers. 
00:08:33.289 ======================================================== 00:08:33.289 Latency(us) 00:08:33.289 Device Information : IOPS MiB/s Average min max 00:08:33.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 520.17 0.25 46997.78 2368.32 1108427.90 00:08:33.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6153.38 3.00 20803.32 2422.06 496194.26 00:08:33.289 ======================================================== 00:08:33.289 Total : 6673.55 3.26 22845.06 2368.32 1108427.90 00:08:33.289 00:08:33.289 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.550 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:08:33.550 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:08:33.550 true 00:08:33.550 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1242022 00:08:33.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1242022) - No such process 00:08:33.550 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1242022 00:08:33.550 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.810 16:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.810 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:33.810 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:33.810 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:33.810 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.810 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:34.071 null0 00:08:34.071 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.071 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.071 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:34.332 null1 00:08:34.332 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.332 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.332 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:34.332 null2 00:08:34.332 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.332 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.332 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:34.592 null3 00:08:34.593 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.593 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.593 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:34.852 null4 00:08:34.852 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.852 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.852 16:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:34.852 null5 00:08:34.852 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.852 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.852 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:35.112 null6 00:08:35.112 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.112 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.112 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:35.112 null7 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
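The records just above set up the concurrent phase: sh@58 initializes nthreads=8 and an empty pids array, and sh@59-@60 create one null bdev per worker (null0 through null7, each created with bdev_null_create <name> 100 4096); the records that follow start launching the add_remove workers. A bash sketch of that setup as it reads from the trace (the for-loop form is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                     # sh@58: number of add/remove workers
    pids=()                                        # sh@58: collects worker PIDs for the final wait
    for ((i = 0; i < nthreads; i++)); do           # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096  # sh@60: null0..null7, size 100, block size 4096
    done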
00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.372 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
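The heavily interleaved records here are the eight background workers being started (sh@62-@64): each add_remove <nsid> <bdev> call pairs namespace ID i+1 with null<i> (add_remove 1 null0 ... add_remove 8 null7), its PID is appended to pids, and sh@66 later waits on all of them (1248637 1248639 ... in the trace). Inside each worker, sh@14-@18 attach the bdev as a namespace and detach it again, ten times per the (( i < 10 )) guard. A bash sketch of the worker and its launch loop, reconstructed from the trace; anything not shown verbatim above is an assumption:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                          # sh@14: one hotplug worker
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do      # sh@16: ten add/remove cycles
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

    for ((i = 0; i < nthreads; i++)); do    # sh@62
        add_remove $((i + 1)) "null$i" &    # sh@63: add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)                          # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                       # sh@66: wait for all eight workers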
00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1248637 1248639 1248641 1248643 1248645 1248647 1248648 1248650 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.373 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.634 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.895 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.895 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.895 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.895 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.895 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.895 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.895 16:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.156 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.157 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.453 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.454 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.717 16:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
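[editor note] The interleaved trace above and below is eight copies of the same small worker racing against each other. A minimal sketch of that worker, reconstructed from the ns_hotplug_stress.sh@14-@18 trace lines; the rpc.py path and subsystem NQN are copied from the log, the function body and variable names are inferred and may differ from the upstream script:

    # Worker reconstructed from the sh@14-@18 trace: attach and detach one
    # namespace ten times against the same subsystem.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsystem=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2               # sh@14, e.g. nsid=7 bdev=null6
        local i
        for ((i = 0; i < 10; i++)); do      # sh@16: ten hotplug rounds
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsystem" "$bdev"   # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns "$subsystem" "$nsid"           # sh@18
        done
    }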
00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.978 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.239 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.500 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.501 16:47:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.501 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.762 16:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.762 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.023 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.024 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.286 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.548 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.549 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.549 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.549 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.549 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
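[editor note] The sh@62-@66 trace entries (visible near the start of this stress run) show how those workers are launched: one backgrounded add_remove per null bdev, with the parent collecting the PIDs and waiting on all of them. A hedged sketch of that launcher; nthreads=8 is an assumption taken from the eight PIDs listed after "wait":

    # Launcher implied by sh@62-@66: nsid i+1 paired with bdev null<i>.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do    # sh@62
        add_remove "$((i + 1))" "null$i" &  # sh@63, run in the background
        pids+=($!)                          # sh@64
    done
    wait "${pids[@]}"                       # sh@66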
00:08:38.549 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.810 16:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.810 rmmod nvme_tcp 00:08:38.810 rmmod nvme_fabrics 00:08:38.810 rmmod nvme_keyring 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1241410 ']' 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1241410 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1241410 ']' 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1241410 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241410 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241410' 00:08:38.810 killing process with pid 1241410 
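[editor note] The teardown above (nvmftestfini) unloads nvme-tcp/nvme-fabrics/nvme-keyring and then stops the nvmf target (pid 1241410) through killprocess. A sketch of that helper reconstructed from the autotest_common.sh@950-@974 trace lines; the real helper carries more error handling than shown here:

    # Reconstructed from autotest_common.sh@950-@974: validate the pid, refuse
    # to kill a bare sudo wrapper, then kill the process and reap it.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # @950: no pid given
        kill -0 "$pid" || return 0                  # @954: already gone
        if [ "$(uname)" = Linux ]; then             # @955
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
            [ "$process_name" = sudo ] && return 1            # @960: never kill sudo itself
        fi
        echo "killing process with pid $pid"        # @968
        kill "$pid"                                 # @969
        wait "$pid"                                 # @974: reap the child
    }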
00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1241410 00:08:38.810 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1241410 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.072 16:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.988 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.250 00:08:41.250 real 0m47.864s 00:08:41.250 user 3m12.932s 00:08:41.250 sys 0m15.552s 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.250 ************************************ 00:08:41.250 END TEST nvmf_ns_hotplug_stress 00:08:41.250 ************************************ 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.250 ************************************ 00:08:41.250 START TEST nvmf_delete_subsystem 00:08:41.250 ************************************ 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:41.250 * Looking for test storage... 
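[editor note] The block above ends one test and starts the next: run_test prints the END banner and real/user/sys timing for nvmf_ns_hotplug_stress, then wraps delete_subsystem.sh --transport=tcp in a matching START banner. A rough sketch of what run_test appears to do, inferred only from the banners, the timing output, and the autotest_common.sh@1101-@1126 trace lines; the actual helper is more elaborate:

    # Hypothetical reduction of run_test as it presents itself in this log.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"       # e.g. .../test/nvmf/target/delete_subsystem.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }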
00:08:41.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.250 16:48:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.400 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
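The gather_supported_nvmf_pci_devs trace that follows builds lists of known Intel (e810/x722) and Mellanox device IDs and then maps each matching PCI function to its kernel net device via /sys. The loop below is only an illustration of that mapping, not the script's own code; the 8086:159b device ID is the one reported for this host, everything else is assumed.
  # illustration: list E810 (8086:159b) PCI functions and the net devices behind them,
  # mirroring what the pci_bus_cache / pci_net_devs logic in nvmf/common.sh discovers
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net/)"
  done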
00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:49.401 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:49.401 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:49.401 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:49.401 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.401 16:48:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:08:49.401 00:08:49.401 --- 10.0.0.2 ping statistics --- 00:08:49.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.401 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.450 ms 00:08:49.401 00:08:49.401 --- 10.0.0.1 ping statistics --- 00:08:49.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.401 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:49.401 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1253908 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1253908 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1253908 ']' 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.402 16:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 [2024-07-25 16:48:08.777653] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
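Condensed from the nvmf_tcp_init and nvmfappstart commands above (interface names, addresses and flags are the ones from this run): one port of the NIC is moved into a private network namespace and addressed as the target side, the other stays in the root namespace as the initiator side, and nvmf_tgt is started inside the namespace. This appears to be how the test drives real NIC traffic between initiator and target on a single host.
  # condensed from the commands logged above; cvl_0_0/cvl_0_1 and 10.0.0.x are from this run
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &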
00:08:49.402 [2024-07-25 16:48:08.777714] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.402 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.402 [2024-07-25 16:48:08.849655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.402 [2024-07-25 16:48:08.924415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.402 [2024-07-25 16:48:08.924456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.402 [2024-07-25 16:48:08.924463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.402 [2024-07-25 16:48:08.924470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.402 [2024-07-25 16:48:08.924476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.402 [2024-07-25 16:48:08.924615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.402 [2024-07-25 16:48:08.924617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 [2024-07-25 16:48:09.596290] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 [2024-07-25 16:48:09.612471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 NULL1 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 Delay0 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1254179 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:49.402 16:48:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:49.664 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.664 [2024-07-25 16:48:09.697159] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
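The rpc_cmd calls above configure the target before the first perf run. Assuming rpc_cmd forwards its arguments to scripts/rpc.py against the default /var/tmp/spdk.sock (the path and socket are assumptions; the subcommands and arguments are copied from the log), the same setup looks roughly like this:
  # sketch only: arguments exactly as logged, rpc.py path and socket assumed
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                    # null bdev NULL1, size 1000, block size 512 (per log)
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
The subsystem is then deleted while this perf workload is in flight, which is why the run below ends with aborted I/O (sc=8) rather than a clean summary.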
00:08:51.581 16:48:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.581 16:48:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.581 16:48:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 starting I/O failed: -6 00:08:51.843 [2024-07-25 16:48:11.924431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf31000 is same with the state(5) to be set 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with 
error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Write completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.843 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 [2024-07-25 16:48:11.925344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf31710 is same with the state(5) to be set 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Write completed with 
error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 
Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Write completed with error (sct=0, sc=8) 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 Read completed with error (sct=0, sc=8) 00:08:51.844 starting I/O failed: -6 00:08:51.844 starting I/O failed: -6 00:08:52.789 [2024-07-25 16:48:12.879401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32ac0 is same with the state(5) to be set 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 
Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 [2024-07-25 16:48:12.927858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf313e0 is same with the state(5) to be set 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 [2024-07-25 16:48:12.928731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0be000d000 is same with the state(5) to be set 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed 
with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 [2024-07-25 16:48:12.928879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0be000d7a0 is same with the state(5) to be set 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Read completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 Write completed with error (sct=0, sc=8) 00:08:52.789 [2024-07-25 16:48:12.928971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf31a40 is same with the state(5) to be set 00:08:52.789 Initializing NVMe Controllers 00:08:52.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:52.789 Controller IO queue size 128, less than required. 00:08:52.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:52.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:52.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:52.789 Initialization complete. Launching workers. 
00:08:52.789 ======================================================== 00:08:52.789 Latency(us) 00:08:52.789 Device Information : IOPS MiB/s Average min max 00:08:52.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.55 0.08 896073.00 926.46 1009995.08 00:08:52.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.51 0.09 951030.37 446.79 2002323.12 00:08:52.789 ======================================================== 00:08:52.789 Total : 346.06 0.17 924104.42 446.79 2002323.12 00:08:52.790 00:08:52.790 [2024-07-25 16:48:12.929632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32ac0 (9): Bad file descriptor 00:08:52.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:52.790 16:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.790 16:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:52.790 16:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1254179 00:08:52.790 16:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:53.362 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:53.362 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1254179 00:08:53.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1254179) - No such process 00:08:53.362 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1254179 00:08:53.362 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1254179 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1254179 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.363 16:48:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.363 [2024-07-25 16:48:13.460406] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1255236 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:53.363 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.363 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.363 [2024-07-25 16:48:13.528742] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
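After re-creating the subsystem, the script starts a second spdk_nvme_perf run (pid 1255236 in this log) and polls it with kill -0 in half-second steps, as the delay/kill/sleep lines below show. The loop body here is a reconstruction of that pattern, not the script's verbatim code; the variable names are assumptions and the pid is the one reported above.
  # reconstruction of the polling pattern seen in this log
  perf_pid=1255236        # pid reported for the second spdk_nvme_perf run
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo "perf did not exit in time"; break; }
      sleep 0.5
  done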
00:08:53.934 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.934 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:53.934 16:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.506 16:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.506 16:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:54.506 16:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.766 16:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.767 16:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:54.767 16:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.338 16:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.338 16:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:55.338 16:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.911 16:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.911 16:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:55.911 16:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.482 16:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.482 16:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:56.482 16:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.482 Initializing NVMe Controllers 00:08:56.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.482 Controller IO queue size 128, less than required. 00:08:56.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:56.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:56.482 Initialization complete. Launching workers. 
00:08:56.482 ======================================================== 00:08:56.482 Latency(us) 00:08:56.482 Device Information : IOPS MiB/s Average min max 00:08:56.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002560.02 1000376.13 1008689.92 00:08:56.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003325.75 1000541.90 1009354.07 00:08:56.482 ======================================================== 00:08:56.482 Total : 256.00 0.12 1002942.89 1000376.13 1009354.07 00:08:56.482 00:08:56.743 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.743 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1255236 00:08:56.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1255236) - No such process 00:08:56.743 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1255236 00:08:56.743 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:56.743 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:56.743 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.743 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.003 rmmod nvme_tcp 00:08:57.003 rmmod nvme_fabrics 00:08:57.003 rmmod nvme_keyring 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1253908 ']' 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1253908 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1253908 ']' 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1253908 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1253908 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1253908' 00:08:57.003 killing process with pid 1253908 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1253908 00:08:57.003 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1253908 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.263 16:48:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:59.177 00:08:59.177 real 0m18.022s 00:08:59.177 user 0m31.069s 00:08:59.177 sys 0m6.209s 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.177 ************************************ 00:08:59.177 END TEST nvmf_delete_subsystem 00:08:59.177 ************************************ 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.177 ************************************ 00:08:59.177 START TEST nvmf_host_management 00:08:59.177 ************************************ 00:08:59.177 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:59.439 * Looking for test storage... 
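The cleanup traced just before nvmf_host_management starts is nvmftestfini from the previous test: it unloads the initiator-side NVMe modules, stops the target application, and flushes the test interfaces. In approximate form, following the order in the log (the body of _remove_spdk_ns is not shown, so that line is an assumption):

    sync
    modprobe -v -r nvme-tcp              # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop nvmf_tgt (pid 1253908 in this run)
    ip netns delete cvl_0_0_ns_spdk      # assumed: what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1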
00:08:59.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.439 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.440 16:48:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.080 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.080 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.080 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.080 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.080 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.341 
16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.341 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:06.342 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:06.342 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:06.342 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:06.342 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.342 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:09:06.603 00:09:06.603 --- 10.0.0.2 ping statistics --- 00:09:06.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.603 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:09:06.603 00:09:06.603 --- 10.0.0.1 ping statistics --- 00:09:06.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.603 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1260092 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1260092 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1260092 ']' 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.603 16:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.603 [2024-07-25 16:48:26.741392] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:09:06.604 [2024-07-25 16:48:26.741456] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.604 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.604 [2024-07-25 16:48:26.829398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.864 [2024-07-25 16:48:26.924022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.864 [2024-07-25 16:48:26.924083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.864 [2024-07-25 16:48:26.924091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.864 [2024-07-25 16:48:26.924098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.864 [2024-07-25 16:48:26.924104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.864 [2024-07-25 16:48:26.924256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.864 [2024-07-25 16:48:26.924492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.864 [2024-07-25 16:48:26.924658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:06.864 [2024-07-25 16:48:26.924659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.436 [2024-07-25 16:48:27.579186] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.436 Malloc0 00:09:07.436 [2024-07-25 16:48:27.640041] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1260461 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1260461 /var/tmp/bdevperf.sock 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1260461 ']' 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:07.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
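Behind the nvmf/common.sh trace above, the host_management target bring-up is short: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace, create the TCP transport, and expose a Malloc0 namespace on 10.0.0.2:4420 (the NOTICE lines confirm the transport init and the listener). A sketch under those assumptions; the script batches its subsystem RPCs through a generated rpcs.txt, so the individual subsystem calls below are reconstructed, not quoted:

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                      # sizes from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a       # reconstructed
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0  # reconstructed
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420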
00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:07.436 { 00:09:07.436 "params": { 00:09:07.436 "name": "Nvme$subsystem", 00:09:07.436 "trtype": "$TEST_TRANSPORT", 00:09:07.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.436 "adrfam": "ipv4", 00:09:07.436 "trsvcid": "$NVMF_PORT", 00:09:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.436 "hdgst": ${hdgst:-false}, 00:09:07.436 "ddgst": ${ddgst:-false} 00:09:07.436 }, 00:09:07.436 "method": "bdev_nvme_attach_controller" 00:09:07.436 } 00:09:07.436 EOF 00:09:07.436 )") 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:07.436 16:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:07.436 "params": { 00:09:07.436 "name": "Nvme0", 00:09:07.436 "trtype": "tcp", 00:09:07.436 "traddr": "10.0.0.2", 00:09:07.436 "adrfam": "ipv4", 00:09:07.436 "trsvcid": "4420", 00:09:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:07.436 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:07.437 "hdgst": false, 00:09:07.437 "ddgst": false 00:09:07.437 }, 00:09:07.437 "method": "bdev_nvme_attach_controller" 00:09:07.437 }' 00:09:07.697 [2024-07-25 16:48:27.740399] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:09:07.697 [2024-07-25 16:48:27.740451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260461 ] 00:09:07.697 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.698 [2024-07-25 16:48:27.799683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.698 [2024-07-25 16:48:27.864865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.958 Running I/O for 10 seconds... 
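The initiator side above is bdevperf driven by a generated JSON config: gen_nvmf_target_json prints the Nvme0 attach stanza shown in the log, and it reaches bdevperf as /dev/fd/63, which suggests process substitution. A minimal sketch of that launch (binary path shortened; the waitforlisten call mirrors the 'Waiting for process...' message above):

    bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock   # then poll bdev_get_iostat on Nvme0n1 until I/O flows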
00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=385 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 385 -ge 100 ']' 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.533 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.533 [2024-07-25 
16:48:28.587103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to 
be set 00:09:08.533 [2024-07-25 16:48:28.587285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.533 [2024-07-25 16:48:28.587460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.587542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa22a0 is same with the state(5) to be set 00:09:08.534 [2024-07-25 16:48:28.588021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588077] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.534 [2024-07-25 16:48:28.588607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.534 [2024-07-25 16:48:28.588617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.588985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.588994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.535 [2024-07-25 16:48:28.589116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.535 [2024-07-25 16:48:28.589125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a664f0 is same with the state(5) to be set 00:09:08.535 [2024-07-25 16:48:28.589165] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a664f0 was disconnected and freed. reset controller. 00:09:08.535 [2024-07-25 16:48:28.590401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:08.535 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.535 task offset: 49152 on job bdev=Nvme0n1 fails 00:09:08.535 00:09:08.535 Latency(us) 00:09:08.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.535 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:08.535 Job: Nvme0n1 ended in about 0.41 seconds with error 00:09:08.535 Verification LBA range: start 0x0 length 0x400 00:09:08.535 Nvme0n1 : 0.41 929.50 58.09 154.92 0.00 57392.30 5734.40 59856.21 00:09:08.535 =================================================================================================================== 00:09:08.535 Total : 929.50 58.09 154.92 0.00 57392.30 5734.40 59856.21 00:09:08.535 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:08.535 [2024-07-25 16:48:28.592661] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:08.535 [2024-07-25 16:48:28.592684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16553b0 (9): Bad file descriptor 00:09:08.535 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.535 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.535 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.535 16:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:08.535 [2024-07-25 16:48:28.650412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:09.480 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1260461 00:09:09.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1260461) - No such process 00:09:09.480 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:09.480 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:09.481 { 00:09:09.481 "params": { 00:09:09.481 "name": "Nvme$subsystem", 00:09:09.481 "trtype": "$TEST_TRANSPORT", 00:09:09.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.481 "adrfam": "ipv4", 00:09:09.481 "trsvcid": "$NVMF_PORT", 00:09:09.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.481 "hdgst": ${hdgst:-false}, 00:09:09.481 "ddgst": ${ddgst:-false} 00:09:09.481 }, 00:09:09.481 "method": "bdev_nvme_attach_controller" 00:09:09.481 } 00:09:09.481 EOF 00:09:09.481 )") 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:09.481 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:09.481 "params": { 00:09:09.481 "name": "Nvme0", 00:09:09.481 "trtype": "tcp", 00:09:09.481 "traddr": "10.0.0.2", 00:09:09.481 "adrfam": "ipv4", 00:09:09.481 "trsvcid": "4420", 00:09:09.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:09.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:09.481 "hdgst": false, 00:09:09.481 "ddgst": false 00:09:09.481 }, 00:09:09.481 "method": "bdev_nvme_attach_controller" 00:09:09.481 }' 00:09:09.481 [2024-07-25 16:48:29.671944] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
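For reference, the second bdevperf pass being launched above boils down to the invocation below. The bdev_nvme_attach_controller entry is copied from the printf output in the trace; the outer wrapper that gen_nvmf_target_json adds around it is not reproduced here, and paths are shortened relative to the spdk checkout. The -q 64 / -o 65536 / -w verify / -t 1 values match the "workload: verify, depth: 64, IO size: 65536" job line printed by bdevperf itself.

    # host side: 1 second of queue-depth-64 verify I/O against the exported namespace
    build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1

    # config passed in on fd 62 (single controller attach, values as printed in the trace):
    # {
    #   "params": {
    #     "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
    #     "adrfam": "ipv4", "trsvcid": "4420",
    #     "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0",
    #     "hdgst": false, "ddgst": false
    #   },
    #   "method": "bdev_nvme_attach_controller"
    # }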
00:09:09.481 [2024-07-25 16:48:29.671998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260808 ] 00:09:09.481 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.481 [2024-07-25 16:48:29.730629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.742 [2024-07-25 16:48:29.793630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.003 Running I/O for 1 seconds... 00:09:10.946 00:09:10.946 Latency(us) 00:09:10.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.946 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:10.946 Verification LBA range: start 0x0 length 0x400 00:09:10.946 Nvme0n1 : 1.03 1613.86 100.87 0.00 0.00 38976.38 7208.96 32549.55 00:09:10.946 =================================================================================================================== 00:09:10.946 Total : 1613.86 100.87 0.00 0.00 38976.38 7208.96 32549.55 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.207 rmmod nvme_tcp 00:09:11.207 rmmod nvme_fabrics 00:09:11.207 rmmod nvme_keyring 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1260092 ']' 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1260092 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1260092 ']' 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1260092 00:09:11.207 16:48:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1260092 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1260092' 00:09:11.207 killing process with pid 1260092 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1260092 00:09:11.207 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1260092 00:09:11.468 [2024-07-25 16:48:31.485102] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.468 16:48:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:13.384 00:09:13.384 real 0m14.145s 00:09:13.384 user 0m22.517s 00:09:13.384 sys 0m6.366s 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 ************************************ 00:09:13.384 END TEST nvmf_host_management 00:09:13.384 ************************************ 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.384 16:48:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.646 ************************************ 00:09:13.646 START TEST nvmf_lvol 00:09:13.646 ************************************ 00:09:13.646 
16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:13.646 * Looking for test storage... 00:09:13.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.646 16:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:20.241 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:20.241 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:20.241 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:20.241 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.241 16:48:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.241 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:09:20.503 00:09:20.503 --- 10.0.0.2 ping statistics --- 00:09:20.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.503 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:09:20.503 00:09:20.503 --- 10.0.0.1 ping statistics --- 00:09:20.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.503 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.503 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1265158 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1265158 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1265158 ']' 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.765 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 [2024-07-25 16:48:40.850356] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
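Stripped of the xtrace prefixes, the network plumbing that the two pings above verify is a two-port loopback rig: port cvl_0_0 is moved into a private namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. The interface names are specific to this host's E810 NIC. Consolidated from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator ns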
00:09:20.765 [2024-07-25 16:48:40.850412] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.765 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.765 [2024-07-25 16:48:40.914520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.765 [2024-07-25 16:48:40.979145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.765 [2024-07-25 16:48:40.979184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.765 [2024-07-25 16:48:40.979191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.765 [2024-07-25 16:48:40.979198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.765 [2024-07-25 16:48:40.979210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.765 [2024-07-25 16:48:40.979276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.765 [2024-07-25 16:48:40.979431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.765 [2024-07-25 16:48:40.979435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.709 [2024-07-25 16:48:41.803539] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.709 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.970 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:21.970 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.970 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:21.970 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:22.231 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:22.492 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=34183d59-90dd-40ee-9559-779d0d2e1f2f 
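Consolidated from the trace, the target-side stack built above is a TCP transport plus two 64 MB malloc bdevs (512-byte blocks) striped into a raid0, with a logical volume store named lvs created on top of the raid. Paths are shortened relative to the spdk checkout; the names and UUID in the comments are the ones returned in this run.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                     # -> Malloc0
    scripts/rpc.py bdev_malloc_create 64 512                     # -> Malloc1
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs            # -> 34183d59-90dd-40ee-9559-779d0d2e1f2f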
00:09:22.492 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 34183d59-90dd-40ee-9559-779d0d2e1f2f lvol 20 00:09:22.492 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2f371fbf-43f2-423e-a123-9a7a5ff37fa1 00:09:22.492 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:22.753 16:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2f371fbf-43f2-423e-a123-9a7a5ff37fa1 00:09:23.014 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:23.014 [2024-07-25 16:48:43.213084] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.014 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.282 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1265857 00:09:23.282 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:23.282 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:23.282 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.227 16:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2f371fbf-43f2-423e-a123-9a7a5ff37fa1 MY_SNAPSHOT 00:09:24.488 16:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=48e52fc0-3b33-4181-8e30-1c4f87cf24ec 00:09:24.488 16:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2f371fbf-43f2-423e-a123-9a7a5ff37fa1 30 00:09:24.750 16:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 48e52fc0-3b33-4181-8e30-1c4f87cf24ec MY_CLONE 00:09:24.750 16:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8cadc3ab-07bb-465f-919e-c591a29fee93 00:09:24.750 16:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8cadc3ab-07bb-465f-919e-c591a29fee93 00:09:25.322 16:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1265857 00:09:33.538 Initializing NVMe Controllers 00:09:33.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:33.538 Controller IO queue size 128, less than required. 00:09:33.538 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:33.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:33.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:33.538 Initialization complete. Launching workers. 00:09:33.538 ======================================================== 00:09:33.538 Latency(us) 00:09:33.538 Device Information : IOPS MiB/s Average min max 00:09:33.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 18025.50 70.41 7102.49 1377.37 56264.13 00:09:33.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12370.80 48.32 10351.40 4000.30 52713.09 00:09:33.538 ======================================================== 00:09:33.538 Total : 30396.30 118.74 8424.74 1377.37 56264.13 00:09:33.538 00:09:33.538 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.798 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2f371fbf-43f2-423e-a123-9a7a5ff37fa1 00:09:33.799 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34183d59-90dd-40ee-9559-779d0d2e1f2f 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.060 rmmod nvme_tcp 00:09:34.060 rmmod nvme_fabrics 00:09:34.060 rmmod nvme_keyring 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1265158 ']' 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1265158 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1265158 ']' 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1265158 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1265158 00:09:34.060 16:48:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1265158' 00:09:34.060 killing process with pid 1265158 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1265158 00:09:34.060 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1265158 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.321 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.867 00:09:36.867 real 0m22.862s 00:09:36.867 user 1m3.325s 00:09:36.867 sys 0m7.679s 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:36.867 ************************************ 00:09:36.867 END TEST nvmf_lvol 00:09:36.867 ************************************ 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.867 ************************************ 00:09:36.867 START TEST nvmf_lvs_grow 00:09:36.867 ************************************ 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:36.867 * Looking for test storage... 
00:09:36.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.867 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.868 16:48:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:36.868 16:48:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.868 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:43.459 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:43.459 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.459 
16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:43.459 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:43.459 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.459 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.460 16:49:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.460 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.780 ms 00:09:43.721 00:09:43.721 --- 10.0.0.2 ping statistics --- 00:09:43.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.721 rtt min/avg/max/mdev = 0.780/0.780/0.780/0.000 ms 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:09:43.721 00:09:43.721 --- 10.0.0.1 ping statistics --- 00:09:43.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.721 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1272205 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1272205 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1272205 ']' 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.721 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 [2024-07-25 16:49:04.030569] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
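Before this second target comes up, nvmftestinit rebuilds the same two-port topology the earlier test used: one physical port is moved into a namespace to act as the target side, the other stays in the host as the initiator. Condensed from the commands traced above (the cvl_* names are the driver-assigned interfaces on this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> host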
00:09:43.983 [2024-07-25 16:49:04.030637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.983 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.983 [2024-07-25 16:49:04.102199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.983 [2024-07-25 16:49:04.175241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.983 [2024-07-25 16:49:04.175280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.983 [2024-07-25 16:49:04.175290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.983 [2024-07-25 16:49:04.175296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.983 [2024-07-25 16:49:04.175302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.983 [2024-07-25 16:49:04.175319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.555 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.555 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:44.555 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.555 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.555 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:44.817 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.817 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:44.817 [2024-07-25 16:49:04.970608] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.817 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:44.817 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.817 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.817 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:44.817 ************************************ 00:09:44.817 START TEST lvs_grow_clean 00:09:44.817 ************************************ 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:44.817 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:45.078 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:45.078 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:45.339 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:45.339 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:45.339 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:45.339 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:45.339 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:45.339 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87f62ef8-6134-434a-89b0-d2aee1db1644 lvol 150 00:09:45.601 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=771285bc-2c5f-4caf-b987-f34650ed1c3a 00:09:45.601 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.601 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:45.601 [2024-07-25 16:49:05.834783] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:45.601 [2024-07-25 16:49:05.834837] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:45.601 true 00:09:45.601 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:45.601 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:45.862 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:45.862 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:46.123 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 771285bc-2c5f-4caf-b987-f34650ed1c3a 00:09:46.123 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:46.385 [2024-07-25 16:49:06.444689] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1272684 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1272684 /var/tmp/bdevperf.sock 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1272684 ']' 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.385 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:46.647 [2024-07-25 16:49:06.658991] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
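The lvs_grow_clean case being set up here exercises growing a logical volume store in place on top of an AIO bdev. A condensed sketch of the RPCs traced above and below (the backing file path is shortened to aio_file here; the 49/99 cluster counts are the ones this run reports):

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$lvs" lvol 150                # consumes most of the 49 data clusters

  truncate -s 400M aio_file                                 # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                           # let the AIO bdev pick up the new size
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                   # extend the lvstore onto the new space
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after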
00:09:46.647 [2024-07-25 16:49:06.659046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272684 ] 00:09:46.647 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.647 [2024-07-25 16:49:06.737659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.647 [2024-07-25 16:49:06.802688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.220 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.220 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:47.220 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:47.481 Nvme0n1 00:09:47.481 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:47.743 [ 00:09:47.743 { 00:09:47.743 "name": "Nvme0n1", 00:09:47.743 "aliases": [ 00:09:47.743 "771285bc-2c5f-4caf-b987-f34650ed1c3a" 00:09:47.743 ], 00:09:47.743 "product_name": "NVMe disk", 00:09:47.743 "block_size": 4096, 00:09:47.743 "num_blocks": 38912, 00:09:47.743 "uuid": "771285bc-2c5f-4caf-b987-f34650ed1c3a", 00:09:47.743 "assigned_rate_limits": { 00:09:47.743 "rw_ios_per_sec": 0, 00:09:47.743 "rw_mbytes_per_sec": 0, 00:09:47.743 "r_mbytes_per_sec": 0, 00:09:47.743 "w_mbytes_per_sec": 0 00:09:47.743 }, 00:09:47.743 "claimed": false, 00:09:47.743 "zoned": false, 00:09:47.743 "supported_io_types": { 00:09:47.743 "read": true, 00:09:47.743 "write": true, 00:09:47.743 "unmap": true, 00:09:47.743 "flush": true, 00:09:47.743 "reset": true, 00:09:47.743 "nvme_admin": true, 00:09:47.743 "nvme_io": true, 00:09:47.743 "nvme_io_md": false, 00:09:47.743 "write_zeroes": true, 00:09:47.743 "zcopy": false, 00:09:47.743 "get_zone_info": false, 00:09:47.743 "zone_management": false, 00:09:47.743 "zone_append": false, 00:09:47.743 "compare": true, 00:09:47.743 "compare_and_write": true, 00:09:47.743 "abort": true, 00:09:47.743 "seek_hole": false, 00:09:47.743 "seek_data": false, 00:09:47.743 "copy": true, 00:09:47.743 "nvme_iov_md": false 00:09:47.743 }, 00:09:47.743 "memory_domains": [ 00:09:47.743 { 00:09:47.743 "dma_device_id": "system", 00:09:47.743 "dma_device_type": 1 00:09:47.743 } 00:09:47.743 ], 00:09:47.743 "driver_specific": { 00:09:47.743 "nvme": [ 00:09:47.743 { 00:09:47.743 "trid": { 00:09:47.743 "trtype": "TCP", 00:09:47.743 "adrfam": "IPv4", 00:09:47.743 "traddr": "10.0.0.2", 00:09:47.743 "trsvcid": "4420", 00:09:47.743 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:47.743 }, 00:09:47.743 "ctrlr_data": { 00:09:47.743 "cntlid": 1, 00:09:47.743 "vendor_id": "0x8086", 00:09:47.743 "model_number": "SPDK bdev Controller", 00:09:47.743 "serial_number": "SPDK0", 00:09:47.743 "firmware_revision": "24.09", 00:09:47.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.743 "oacs": { 00:09:47.743 "security": 0, 00:09:47.743 "format": 0, 00:09:47.743 "firmware": 0, 00:09:47.743 "ns_manage": 0 00:09:47.743 }, 00:09:47.743 
"multi_ctrlr": true, 00:09:47.743 "ana_reporting": false 00:09:47.743 }, 00:09:47.743 "vs": { 00:09:47.743 "nvme_version": "1.3" 00:09:47.743 }, 00:09:47.743 "ns_data": { 00:09:47.743 "id": 1, 00:09:47.743 "can_share": true 00:09:47.743 } 00:09:47.743 } 00:09:47.743 ], 00:09:47.743 "mp_policy": "active_passive" 00:09:47.743 } 00:09:47.743 } 00:09:47.743 ] 00:09:47.743 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1272935 00:09:47.743 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:47.743 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.743 Running I/O for 10 seconds... 00:09:48.688 Latency(us) 00:09:48.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.688 Nvme0n1 : 1.00 17516.00 68.42 0.00 0.00 0.00 0.00 0.00 00:09:48.688 =================================================================================================================== 00:09:48.688 Total : 17516.00 68.42 0.00 0.00 0.00 0.00 0.00 00:09:48.688 00:09:49.631 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:49.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.891 Nvme0n1 : 2.00 17670.00 69.02 0.00 0.00 0.00 0.00 0.00 00:09:49.891 =================================================================================================================== 00:09:49.892 Total : 17670.00 69.02 0.00 0.00 0.00 0.00 0.00 00:09:49.892 00:09:49.892 true 00:09:49.892 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:49.892 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:50.152 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:50.152 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:50.152 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1272935 00:09:50.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.724 Nvme0n1 : 3.00 17713.33 69.19 0.00 0.00 0.00 0.00 0.00 00:09:50.724 =================================================================================================================== 00:09:50.724 Total : 17713.33 69.19 0.00 0.00 0.00 0.00 0.00 00:09:50.724 00:09:52.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.110 Nvme0n1 : 4.00 17757.00 69.36 0.00 0.00 0.00 0.00 0.00 00:09:52.110 =================================================================================================================== 00:09:52.110 Total : 17757.00 69.36 0.00 0.00 0.00 0.00 0.00 00:09:52.110 00:09:52.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:52.680 Nvme0n1 : 5.00 17789.60 69.49 0.00 0.00 0.00 0.00 0.00 00:09:52.680 =================================================================================================================== 00:09:52.680 Total : 17789.60 69.49 0.00 0.00 0.00 0.00 0.00 00:09:52.680 00:09:54.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.077 Nvme0n1 : 6.00 17815.33 69.59 0.00 0.00 0.00 0.00 0.00 00:09:54.077 =================================================================================================================== 00:09:54.077 Total : 17815.33 69.59 0.00 0.00 0.00 0.00 0.00 00:09:54.077 00:09:55.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.050 Nvme0n1 : 7.00 17836.00 69.67 0.00 0.00 0.00 0.00 0.00 00:09:55.050 =================================================================================================================== 00:09:55.050 Total : 17836.00 69.67 0.00 0.00 0.00 0.00 0.00 00:09:55.050 00:09:55.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.991 Nvme0n1 : 8.00 17853.50 69.74 0.00 0.00 0.00 0.00 0.00 00:09:55.991 =================================================================================================================== 00:09:55.991 Total : 17853.50 69.74 0.00 0.00 0.00 0.00 0.00 00:09:55.991 00:09:56.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.933 Nvme0n1 : 9.00 17867.11 69.79 0.00 0.00 0.00 0.00 0.00 00:09:56.933 =================================================================================================================== 00:09:56.933 Total : 17867.11 69.79 0.00 0.00 0.00 0.00 0.00 00:09:56.933 00:09:57.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.875 Nvme0n1 : 10.00 17879.60 69.84 0.00 0.00 0.00 0.00 0.00 00:09:57.875 =================================================================================================================== 00:09:57.875 Total : 17879.60 69.84 0.00 0.00 0.00 0.00 0.00 00:09:57.875 00:09:57.875 00:09:57.875 Latency(us) 00:09:57.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.875 Nvme0n1 : 10.01 17879.07 69.84 0.00 0.00 7154.22 5597.87 19005.44 00:09:57.875 =================================================================================================================== 00:09:57.875 Total : 17879.07 69.84 0.00 0.00 7154.22 5597.87 19005.44 00:09:57.875 0 00:09:57.875 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1272684 00:09:57.875 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1272684 ']' 00:09:57.875 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1272684 00:09:57.875 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:57.875 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.875 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272684 00:09:57.875 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:57.875 
16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:57.875 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272684' 00:09:57.875 killing process with pid 1272684 00:09:57.875 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1272684 00:09:57.875 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.875 00:09:57.875 Latency(us) 00:09:57.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.875 =================================================================================================================== 00:09:57.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.875 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1272684 00:09:58.136 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.136 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.398 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:58.398 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:58.659 [2024-07-25 16:49:18.815711] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:58.659 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:58.920 request: 00:09:58.920 { 00:09:58.920 "uuid": "87f62ef8-6134-434a-89b0-d2aee1db1644", 00:09:58.920 "method": "bdev_lvol_get_lvstores", 00:09:58.920 "req_id": 1 00:09:58.920 } 00:09:58.920 Got JSON-RPC error response 00:09:58.920 response: 00:09:58.920 { 00:09:58.920 "code": -19, 00:09:58.920 "message": "No such device" 00:09:58.920 } 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:58.920 aio_bdev 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 771285bc-2c5f-4caf-b987-f34650ed1c3a 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=771285bc-2c5f-4caf-b987-f34650ed1c3a 00:09:58.920 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.921 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:58.921 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.921 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.921 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:59.181 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 771285bc-2c5f-4caf-b987-f34650ed1c3a -t 2000 00:09:59.181 [ 00:09:59.181 { 00:09:59.181 "name": "771285bc-2c5f-4caf-b987-f34650ed1c3a", 00:09:59.181 "aliases": [ 00:09:59.181 "lvs/lvol" 00:09:59.181 ], 00:09:59.181 "product_name": "Logical Volume", 00:09:59.181 "block_size": 4096, 00:09:59.181 "num_blocks": 38912, 00:09:59.181 "uuid": "771285bc-2c5f-4caf-b987-f34650ed1c3a", 00:09:59.181 "assigned_rate_limits": { 00:09:59.182 "rw_ios_per_sec": 0, 00:09:59.182 "rw_mbytes_per_sec": 0, 00:09:59.182 "r_mbytes_per_sec": 0, 00:09:59.182 "w_mbytes_per_sec": 0 00:09:59.182 }, 00:09:59.182 "claimed": false, 00:09:59.182 "zoned": false, 00:09:59.182 "supported_io_types": { 00:09:59.182 "read": true, 00:09:59.182 "write": true, 00:09:59.182 "unmap": true, 00:09:59.182 "flush": false, 00:09:59.182 "reset": true, 00:09:59.182 "nvme_admin": false, 00:09:59.182 "nvme_io": false, 00:09:59.182 "nvme_io_md": false, 00:09:59.182 "write_zeroes": true, 00:09:59.182 "zcopy": false, 00:09:59.182 "get_zone_info": false, 00:09:59.182 "zone_management": false, 00:09:59.182 "zone_append": false, 00:09:59.182 "compare": false, 00:09:59.182 "compare_and_write": false, 00:09:59.182 "abort": false, 00:09:59.182 "seek_hole": true, 00:09:59.182 "seek_data": true, 00:09:59.182 "copy": false, 00:09:59.182 "nvme_iov_md": false 00:09:59.182 }, 00:09:59.182 "driver_specific": { 00:09:59.182 "lvol": { 00:09:59.182 "lvol_store_uuid": "87f62ef8-6134-434a-89b0-d2aee1db1644", 00:09:59.182 "base_bdev": "aio_bdev", 00:09:59.182 "thin_provision": false, 00:09:59.182 "num_allocated_clusters": 38, 00:09:59.182 "snapshot": false, 00:09:59.182 "clone": false, 00:09:59.182 "esnap_clone": false 00:09:59.182 } 00:09:59.182 } 00:09:59.182 } 00:09:59.182 ] 00:09:59.442 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:59.442 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:59.442 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:59.442 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:59.442 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:59.442 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:59.703 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:59.703 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 771285bc-2c5f-4caf-b987-f34650ed1c3a 00:09:59.703 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87f62ef8-6134-434a-89b0-d2aee1db1644 00:09:59.964 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:00.226 00:10:00.226 real 0m15.254s 00:10:00.226 user 0m14.939s 00:10:00.226 sys 0m1.308s 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:00.226 ************************************ 00:10:00.226 END TEST lvs_grow_clean 00:10:00.226 ************************************ 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:00.226 ************************************ 00:10:00.226 START TEST lvs_grow_dirty 00:10:00.226 ************************************ 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:00.226 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.488 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:00.488 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:00.488 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=7487037f-aa72-4a37-8e49-f50cd9936432 00:10:00.488 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:00.488 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:00.750 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:00.750 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:00.750 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7487037f-aa72-4a37-8e49-f50cd9936432 lvol 150 00:10:01.011 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=60e95249-2b7b-4a45-96ed-371989dfb9b9 00:10:01.011 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.011 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:01.011 [2024-07-25 16:49:21.209345] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:01.011 [2024-07-25 16:49:21.209400] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:01.011 true 00:10:01.011 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:01.011 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:01.271 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:01.271 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:01.271 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60e95249-2b7b-4a45-96ed-371989dfb9b9 00:10:01.532 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:01.794 [2024-07-25 16:49:21.835285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.794 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
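To make the xtrace above easier to follow, the lvs_grow_dirty setup it performs can be condensed into the short sketch below. This is a minimal sketch, not the harness itself: rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, $AIO_FILE stands for the test/nvmf/target/aio_bdev backing file, and <lvs-uuid>/<lvol-uuid> are placeholders for the UUIDs printed in this run (7487037f-... and 60e95249-...).
# condensed from the commands visible in the log above; placeholders are illustrative
truncate -s 200M "$AIO_FILE"                                  # 200M backing file
rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096              # register it as bdev "aio_bdev"
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
       --md-pages-per-cluster-ratio 300 aio_bdev lvs          # lvstore <lvs-uuid>, 49 data clusters
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                # 150M lvol <lvol-uuid>
truncate -s 400M "$AIO_FILE"                                  # grow the backing file
rpc.py bdev_aio_rescan aio_bdev                               # 51200 -> 102400 blocks; lvstore still reports 49 clusters
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420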
00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1275816 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1275816 /var/tmp/bdevperf.sock 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1275816 ']' 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:01.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.794 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.794 [2024-07-25 16:49:22.050584] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
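The I/O side of the test runs in a separate bdevperf process that listens on its own RPC socket (-r /var/tmp/bdevperf.sock, started with -z so it waits to be driven). The attach and perform_tests calls that appear in the following lines boil down to roughly the sketch below; rpc.py, bdevperf and bdevperf.py again stand for their full paths under the workspace, so treat this as illustrative rather than the literal harness code.
# sketch of the bdevperf side; matches the -r /var/tmp/bdevperf.sock invocation above
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # wait for RPC commands
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0                 # exposes Nvme0n1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                                      # 10 s of 4 KiB randwrite at QD 128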
00:10:01.794 [2024-07-25 16:49:22.050636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275816 ] 00:10:02.056 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.056 [2024-07-25 16:49:22.126131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.056 [2024-07-25 16:49:22.179910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.627 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.627 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:02.627 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:03.199 Nvme0n1 00:10:03.200 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:03.200 [ 00:10:03.200 { 00:10:03.200 "name": "Nvme0n1", 00:10:03.200 "aliases": [ 00:10:03.200 "60e95249-2b7b-4a45-96ed-371989dfb9b9" 00:10:03.200 ], 00:10:03.200 "product_name": "NVMe disk", 00:10:03.200 "block_size": 4096, 00:10:03.200 "num_blocks": 38912, 00:10:03.200 "uuid": "60e95249-2b7b-4a45-96ed-371989dfb9b9", 00:10:03.200 "assigned_rate_limits": { 00:10:03.200 "rw_ios_per_sec": 0, 00:10:03.200 "rw_mbytes_per_sec": 0, 00:10:03.200 "r_mbytes_per_sec": 0, 00:10:03.200 "w_mbytes_per_sec": 0 00:10:03.200 }, 00:10:03.200 "claimed": false, 00:10:03.200 "zoned": false, 00:10:03.200 "supported_io_types": { 00:10:03.200 "read": true, 00:10:03.200 "write": true, 00:10:03.200 "unmap": true, 00:10:03.200 "flush": true, 00:10:03.200 "reset": true, 00:10:03.200 "nvme_admin": true, 00:10:03.200 "nvme_io": true, 00:10:03.200 "nvme_io_md": false, 00:10:03.200 "write_zeroes": true, 00:10:03.200 "zcopy": false, 00:10:03.200 "get_zone_info": false, 00:10:03.200 "zone_management": false, 00:10:03.200 "zone_append": false, 00:10:03.200 "compare": true, 00:10:03.200 "compare_and_write": true, 00:10:03.200 "abort": true, 00:10:03.200 "seek_hole": false, 00:10:03.200 "seek_data": false, 00:10:03.200 "copy": true, 00:10:03.200 "nvme_iov_md": false 00:10:03.200 }, 00:10:03.200 "memory_domains": [ 00:10:03.200 { 00:10:03.200 "dma_device_id": "system", 00:10:03.200 "dma_device_type": 1 00:10:03.200 } 00:10:03.200 ], 00:10:03.200 "driver_specific": { 00:10:03.200 "nvme": [ 00:10:03.200 { 00:10:03.200 "trid": { 00:10:03.200 "trtype": "TCP", 00:10:03.200 "adrfam": "IPv4", 00:10:03.200 "traddr": "10.0.0.2", 00:10:03.200 "trsvcid": "4420", 00:10:03.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:03.200 }, 00:10:03.200 "ctrlr_data": { 00:10:03.200 "cntlid": 1, 00:10:03.200 "vendor_id": "0x8086", 00:10:03.200 "model_number": "SPDK bdev Controller", 00:10:03.200 "serial_number": "SPDK0", 00:10:03.200 "firmware_revision": "24.09", 00:10:03.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:03.200 "oacs": { 00:10:03.200 "security": 0, 00:10:03.200 "format": 0, 00:10:03.200 "firmware": 0, 00:10:03.200 "ns_manage": 0 00:10:03.200 }, 00:10:03.200 
"multi_ctrlr": true, 00:10:03.200 "ana_reporting": false 00:10:03.200 }, 00:10:03.200 "vs": { 00:10:03.200 "nvme_version": "1.3" 00:10:03.200 }, 00:10:03.200 "ns_data": { 00:10:03.200 "id": 1, 00:10:03.200 "can_share": true 00:10:03.200 } 00:10:03.200 } 00:10:03.200 ], 00:10:03.200 "mp_policy": "active_passive" 00:10:03.200 } 00:10:03.200 } 00:10:03.200 ] 00:10:03.200 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1276027 00:10:03.200 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:03.200 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:03.200 Running I/O for 10 seconds... 00:10:04.585 Latency(us) 00:10:04.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:04.586 Nvme0n1 : 1.00 18077.00 70.61 0.00 0.00 0.00 0.00 0.00 00:10:04.586 =================================================================================================================== 00:10:04.586 Total : 18077.00 70.61 0.00 0.00 0.00 0.00 0.00 00:10:04.586 00:10:05.159 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:05.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.420 Nvme0n1 : 2.00 18190.50 71.06 0.00 0.00 0.00 0.00 0.00 00:10:05.420 =================================================================================================================== 00:10:05.420 Total : 18190.50 71.06 0.00 0.00 0.00 0.00 0.00 00:10:05.420 00:10:05.420 true 00:10:05.420 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:05.420 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:05.682 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:05.682 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:05.682 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1276027 00:10:06.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.251 Nvme0n1 : 3.00 18258.33 71.32 0.00 0.00 0.00 0.00 0.00 00:10:06.251 =================================================================================================================== 00:10:06.251 Total : 18258.33 71.32 0.00 0.00 0.00 0.00 0.00 00:10:06.251 00:10:07.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.206 Nvme0n1 : 4.00 18287.75 71.44 0.00 0.00 0.00 0.00 0.00 00:10:07.206 =================================================================================================================== 00:10:07.206 Total : 18287.75 71.44 0.00 0.00 0.00 0.00 0.00 00:10:07.206 00:10:08.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:10:08.591 Nvme0n1 : 5.00 18316.60 71.55 0.00 0.00 0.00 0.00 0.00 00:10:08.591 =================================================================================================================== 00:10:08.591 Total : 18316.60 71.55 0.00 0.00 0.00 0.00 0.00 00:10:08.591 00:10:09.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.534 Nvme0n1 : 6.00 18340.83 71.64 0.00 0.00 0.00 0.00 0.00 00:10:09.534 =================================================================================================================== 00:10:09.534 Total : 18340.83 71.64 0.00 0.00 0.00 0.00 0.00 00:10:09.534 00:10:10.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.477 Nvme0n1 : 7.00 18344.71 71.66 0.00 0.00 0.00 0.00 0.00 00:10:10.477 =================================================================================================================== 00:10:10.477 Total : 18344.71 71.66 0.00 0.00 0.00 0.00 0.00 00:10:10.477 00:10:11.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.419 Nvme0n1 : 8.00 18357.75 71.71 0.00 0.00 0.00 0.00 0.00 00:10:11.419 =================================================================================================================== 00:10:11.419 Total : 18357.75 71.71 0.00 0.00 0.00 0.00 0.00 00:10:11.419 00:10:12.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.363 Nvme0n1 : 9.00 18374.11 71.77 0.00 0.00 0.00 0.00 0.00 00:10:12.363 =================================================================================================================== 00:10:12.363 Total : 18374.11 71.77 0.00 0.00 0.00 0.00 0.00 00:10:12.363 00:10:13.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.331 Nvme0n1 : 10.00 18383.60 71.81 0.00 0.00 0.00 0.00 0.00 00:10:13.331 =================================================================================================================== 00:10:13.331 Total : 18383.60 71.81 0.00 0.00 0.00 0.00 0.00 00:10:13.331 00:10:13.331 00:10:13.331 Latency(us) 00:10:13.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.331 Nvme0n1 : 10.01 18386.21 71.82 0.00 0.00 6958.64 2362.03 14090.24 00:10:13.331 =================================================================================================================== 00:10:13.331 Total : 18386.21 71.82 0.00 0.00 6958.64 2362.03 14090.24 00:10:13.331 0 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1275816 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1275816 ']' 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1275816 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275816 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:13.331 
16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275816' 00:10:13.331 killing process with pid 1275816 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1275816 00:10:13.331 Received shutdown signal, test time was about 10.000000 seconds 00:10:13.331 00:10:13.331 Latency(us) 00:10:13.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.331 =================================================================================================================== 00:10:13.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:13.331 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1275816 00:10:13.592 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.592 16:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:13.853 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:13.853 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1272205 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1272205 00:10:14.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1272205 Killed "${NVMF_APP[@]}" "$@" 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1278361 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1278361 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1278361 ']' 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.113 16:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.113 [2024-07-25 16:49:34.337078] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:10:14.113 [2024-07-25 16:49:34.337134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.113 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.373 [2024-07-25 16:49:34.403186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.373 [2024-07-25 16:49:34.468306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.373 [2024-07-25 16:49:34.468341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.373 [2024-07-25 16:49:34.468348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.373 [2024-07-25 16:49:34.468355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.373 [2024-07-25 16:49:34.468360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
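At this point the original nvmf_tgt (pid 1272205) has been killed with -9 while the lvstore was dirty, and a fresh target (pid 1278361) has been started inside the cvl_0_0_ns_spdk namespace. The recovery check the next lines run is essentially the sketch below; rpc.py and the aio_bdev path are abbreviated as before, and the 61/99 values are the free and total data cluster counts this particular run asserts.
# sketch of the dirty-recovery verification performed below
rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096    # re-register the same backing file;
                                                    # blobstore recovery replays blobs 0x0 and 0x1
rpc.py bdev_wait_for_examine                        # wait for lvs/lvol to be re-exposed
rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'        # expected: 61
rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'  # expected: 99 (post-grow)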
00:10:14.373 [2024-07-25 16:49:34.468377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.943 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.943 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:14.943 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.943 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.943 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.943 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.943 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:15.203 [2024-07-25 16:49:35.281177] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:15.203 [2024-07-25 16:49:35.281271] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:15.203 [2024-07-25 16:49:35.281300] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:15.203 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:15.203 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 60e95249-2b7b-4a45-96ed-371989dfb9b9 00:10:15.203 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=60e95249-2b7b-4a45-96ed-371989dfb9b9 00:10:15.203 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.203 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:15.204 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.204 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.204 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:15.204 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 60e95249-2b7b-4a45-96ed-371989dfb9b9 -t 2000 00:10:15.463 [ 00:10:15.463 { 00:10:15.463 "name": "60e95249-2b7b-4a45-96ed-371989dfb9b9", 00:10:15.463 "aliases": [ 00:10:15.463 "lvs/lvol" 00:10:15.463 ], 00:10:15.463 "product_name": "Logical Volume", 00:10:15.463 "block_size": 4096, 00:10:15.463 "num_blocks": 38912, 00:10:15.463 "uuid": "60e95249-2b7b-4a45-96ed-371989dfb9b9", 00:10:15.463 "assigned_rate_limits": { 00:10:15.463 "rw_ios_per_sec": 0, 00:10:15.463 "rw_mbytes_per_sec": 0, 00:10:15.463 "r_mbytes_per_sec": 0, 00:10:15.463 "w_mbytes_per_sec": 0 00:10:15.463 }, 00:10:15.463 "claimed": false, 00:10:15.463 "zoned": false, 
00:10:15.463 "supported_io_types": { 00:10:15.463 "read": true, 00:10:15.463 "write": true, 00:10:15.463 "unmap": true, 00:10:15.463 "flush": false, 00:10:15.463 "reset": true, 00:10:15.463 "nvme_admin": false, 00:10:15.463 "nvme_io": false, 00:10:15.463 "nvme_io_md": false, 00:10:15.463 "write_zeroes": true, 00:10:15.463 "zcopy": false, 00:10:15.463 "get_zone_info": false, 00:10:15.463 "zone_management": false, 00:10:15.463 "zone_append": false, 00:10:15.463 "compare": false, 00:10:15.463 "compare_and_write": false, 00:10:15.463 "abort": false, 00:10:15.463 "seek_hole": true, 00:10:15.463 "seek_data": true, 00:10:15.463 "copy": false, 00:10:15.463 "nvme_iov_md": false 00:10:15.463 }, 00:10:15.463 "driver_specific": { 00:10:15.463 "lvol": { 00:10:15.463 "lvol_store_uuid": "7487037f-aa72-4a37-8e49-f50cd9936432", 00:10:15.463 "base_bdev": "aio_bdev", 00:10:15.463 "thin_provision": false, 00:10:15.463 "num_allocated_clusters": 38, 00:10:15.463 "snapshot": false, 00:10:15.463 "clone": false, 00:10:15.463 "esnap_clone": false 00:10:15.463 } 00:10:15.463 } 00:10:15.463 } 00:10:15.463 ] 00:10:15.463 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:15.463 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:15.463 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:15.723 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:15.723 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:15.723 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:15.723 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:15.723 16:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:15.984 [2024-07-25 16:49:36.061188] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:15.984 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:15.984 request: 00:10:15.984 { 00:10:15.984 "uuid": "7487037f-aa72-4a37-8e49-f50cd9936432", 00:10:15.984 "method": "bdev_lvol_get_lvstores", 00:10:15.984 "req_id": 1 00:10:15.984 } 00:10:15.984 Got JSON-RPC error response 00:10:15.984 response: 00:10:15.984 { 00:10:15.984 "code": -19, 00:10:15.984 "message": "No such device" 00:10:15.984 } 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:16.244 aio_bdev 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 60e95249-2b7b-4a45-96ed-371989dfb9b9 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=60e95249-2b7b-4a45-96ed-371989dfb9b9 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.244 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:16.505 16:49:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 60e95249-2b7b-4a45-96ed-371989dfb9b9 -t 2000 00:10:16.505 [ 00:10:16.505 { 00:10:16.505 "name": "60e95249-2b7b-4a45-96ed-371989dfb9b9", 00:10:16.505 "aliases": [ 00:10:16.505 "lvs/lvol" 00:10:16.505 ], 00:10:16.505 "product_name": "Logical Volume", 00:10:16.505 "block_size": 4096, 00:10:16.505 "num_blocks": 38912, 00:10:16.505 "uuid": "60e95249-2b7b-4a45-96ed-371989dfb9b9", 00:10:16.505 "assigned_rate_limits": { 00:10:16.505 "rw_ios_per_sec": 0, 00:10:16.505 "rw_mbytes_per_sec": 0, 00:10:16.505 "r_mbytes_per_sec": 0, 00:10:16.505 "w_mbytes_per_sec": 0 00:10:16.505 }, 00:10:16.505 "claimed": false, 00:10:16.505 "zoned": false, 00:10:16.505 "supported_io_types": { 00:10:16.505 "read": true, 00:10:16.505 "write": true, 00:10:16.505 "unmap": true, 00:10:16.505 "flush": false, 00:10:16.505 "reset": true, 00:10:16.505 "nvme_admin": false, 00:10:16.505 "nvme_io": false, 00:10:16.505 "nvme_io_md": false, 00:10:16.505 "write_zeroes": true, 00:10:16.505 "zcopy": false, 00:10:16.505 "get_zone_info": false, 00:10:16.505 "zone_management": false, 00:10:16.505 "zone_append": false, 00:10:16.505 "compare": false, 00:10:16.505 "compare_and_write": false, 00:10:16.505 "abort": false, 00:10:16.505 "seek_hole": true, 00:10:16.505 "seek_data": true, 00:10:16.505 "copy": false, 00:10:16.505 "nvme_iov_md": false 00:10:16.505 }, 00:10:16.505 "driver_specific": { 00:10:16.505 "lvol": { 00:10:16.505 "lvol_store_uuid": "7487037f-aa72-4a37-8e49-f50cd9936432", 00:10:16.505 "base_bdev": "aio_bdev", 00:10:16.505 "thin_provision": false, 00:10:16.505 "num_allocated_clusters": 38, 00:10:16.505 "snapshot": false, 00:10:16.505 "clone": false, 00:10:16.505 "esnap_clone": false 00:10:16.505 } 00:10:16.505 } 00:10:16.505 } 00:10:16.505 ] 00:10:16.765 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:16.765 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:16.765 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:16.765 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:16.765 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7487037f-aa72-4a37-8e49-f50cd9936432 00:10:16.765 16:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:17.025 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:17.025 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60e95249-2b7b-4a45-96ed-371989dfb9b9 00:10:17.025 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7487037f-aa72-4a37-8e49-f50cd9936432 
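The cleanup that finishes lvs_grow_dirty, continued on the next lines, reduces to the short sequence below; as before this is a condensed sketch with abbreviated paths, and the UUIDs are the ones from this run.
# sketch of the cleanup around this point (continues on the following lines)
rpc.py bdev_lvol_delete <lvol-uuid>             # 60e95249-... in this run
rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>   # 7487037f-... in this run
rpc.py bdev_aio_delete aio_bdev
rm -f "$AIO_FILE"                               # test/nvmf/target/aio_bdev backing file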
00:10:17.286 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:17.547 00:10:17.547 real 0m17.280s 00:10:17.547 user 0m44.690s 00:10:17.547 sys 0m3.017s 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:17.547 ************************************ 00:10:17.547 END TEST lvs_grow_dirty 00:10:17.547 ************************************ 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:17.547 nvmf_trace.0 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.547 rmmod nvme_tcp 00:10:17.547 rmmod nvme_fabrics 00:10:17.547 rmmod nvme_keyring 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1278361 ']' 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1278361 00:10:17.547 
16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1278361 ']' 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1278361 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.547 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1278361 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1278361' 00:10:17.808 killing process with pid 1278361 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1278361 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1278361 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.808 16:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.358 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:20.358 00:10:20.358 real 0m43.458s 00:10:20.358 user 1m5.726s 00:10:20.358 sys 0m10.069s 00:10:20.358 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.358 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:20.358 ************************************ 00:10:20.358 END TEST nvmf_lvs_grow 00:10:20.358 ************************************ 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.359 ************************************ 00:10:20.359 START TEST nvmf_bdev_io_wait 00:10:20.359 ************************************ 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:20.359 * Looking for test storage... 00:10:20.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.359 
16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.359 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:26.957 16:49:46 
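Note: the array bucketing that follows keys the supported NICs off vendor:device IDs (Intel 0x8086 for the e810/x722 lists, Mellanox 0x15b3 for the mlx list). A minimal standalone sketch of the same grouping, assuming lspci is available; the variables here are illustrative and not SPDK's pci_bus_cache:

# Hedged sketch: bucket Ethernet controllers by vendor:device, as the e810/x722/mlx arrays are filled below.
intel=8086; mellanox=15b3
declare -a e810 x722 mlx
while read -r pci; do
  id=$(lspci -n -s "$pci" | awk '{print $3}')     # e.g. 8086:159b for the E810 ports seen in this run
  case "$id" in
    "$intel:1592"|"$intel:159b") e810+=("$pci") ;;
    "$intel:37d2")               x722+=("$pci") ;;
    "$mellanox:"*)               mlx+=("$pci")  ;;
  esac
done < <(lspci -D | awk '/Ethernet controller/ {print $1}')
echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"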
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:26.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:26.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.957 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:26.958 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:26.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:26.958 16:49:46 
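Note: the two "Found net devices under 0000:4b:00.x" lines come from expanding /sys/bus/pci/devices/$pci/net/* for each detected port and keeping only interfaces that are up. A short sketch of that lookup; the operstate read is an assumption about what the [[ up == up ]] test expands from, and the device names are the ones seen in this run:

# Hedged sketch: resolve a NIC's PCI address to its kernel net device(s) via sysfs.
pci=0000:4b:00.0
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
  name=${dev##*/}                              # cvl_0_0 in this log
  [[ $(cat "$dev"/operstate) == up ]] || continue
  echo "Found net devices under $pci: $name"
done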
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.958 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:10:26.958 00:10:26.958 --- 10.0.0.2 ping statistics --- 00:10:26.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.958 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:10:26.958 00:10:26.958 --- 10.0.0.1 ping statistics --- 00:10:26.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.958 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1283109 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1283109 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1283109 ']' 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.958 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:26.958 [2024-07-25 16:49:47.123510] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
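Note: nvmf_tcp_init above turns the two E810 ports into a target/initiator pair by moving the target port into a dedicated network namespace; the two pings just confirmed 10.0.0.1 and 10.0.0.2 can reach each other across it. Condensed recap of the exact commands from this log:

# cvl_0_0 becomes the target NIC inside cvl_0_0_ns_spdk; cvl_0_1 stays in the root namespace as the initiator NIC.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) traffic in
ping -c 1 10.0.0.2                                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns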
00:10:26.958 [2024-07-25 16:49:47.123564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.958 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.958 [2024-07-25 16:49:47.191482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.220 [2024-07-25 16:49:47.260974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.220 [2024-07-25 16:49:47.261011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.220 [2024-07-25 16:49:47.261019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.220 [2024-07-25 16:49:47.261025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.220 [2024-07-25 16:49:47.261031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.220 [2024-07-25 16:49:47.261169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.221 [2024-07-25 16:49:47.261311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.221 [2024-07-25 16:49:47.261559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.221 [2024-07-25 16:49:47.261560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.793 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.793 16:49:47 
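Note: because nvmf_tgt was launched with --wait-for-rpc inside the namespace, everything from here on is configured over the /var/tmp/spdk.sock RPC socket; the rpc_cmd calls in this test (including the Malloc0, subsystem and listener steps just below) map onto plain rpc.py invocations. A hedged sketch of that bring-up sequence, assuming the in-tree scripts/rpc.py client:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path
$RPC bdev_set_options -p 5 -c 1          # small bdev_io pool/cache limits, the condition this io-wait test exercises
$RPC framework_start_init                # release the --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420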
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.793 [2024-07-25 16:49:48.001839] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.793 Malloc0 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.793 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.056 [2024-07-25 16:49:48.079336] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1283408 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1283411 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.056 { 00:10:28.056 "params": { 00:10:28.056 "name": "Nvme$subsystem", 00:10:28.056 "trtype": "$TEST_TRANSPORT", 00:10:28.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.056 "adrfam": "ipv4", 00:10:28.056 "trsvcid": "$NVMF_PORT", 00:10:28.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.056 "hdgst": ${hdgst:-false}, 00:10:28.056 "ddgst": ${ddgst:-false} 00:10:28.056 }, 00:10:28.056 "method": "bdev_nvme_attach_controller" 00:10:28.056 } 00:10:28.056 EOF 00:10:28.056 )") 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1283414 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1283417 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.056 { 00:10:28.056 "params": { 00:10:28.056 "name": "Nvme$subsystem", 00:10:28.056 "trtype": "$TEST_TRANSPORT", 00:10:28.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.056 "adrfam": "ipv4", 00:10:28.056 "trsvcid": "$NVMF_PORT", 00:10:28.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.056 "hdgst": ${hdgst:-false}, 00:10:28.056 "ddgst": ${ddgst:-false} 00:10:28.056 }, 00:10:28.056 "method": "bdev_nvme_attach_controller" 00:10:28.056 } 00:10:28.056 EOF 00:10:28.056 )") 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.056 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.057 { 00:10:28.057 "params": { 00:10:28.057 "name": "Nvme$subsystem", 00:10:28.057 "trtype": "$TEST_TRANSPORT", 00:10:28.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.057 "adrfam": "ipv4", 00:10:28.057 "trsvcid": "$NVMF_PORT", 00:10:28.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.057 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.057 "hdgst": ${hdgst:-false}, 00:10:28.057 "ddgst": ${ddgst:-false} 00:10:28.057 }, 00:10:28.057 "method": "bdev_nvme_attach_controller" 00:10:28.057 } 00:10:28.057 EOF 00:10:28.057 )") 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.057 { 00:10:28.057 "params": { 00:10:28.057 "name": "Nvme$subsystem", 00:10:28.057 "trtype": "$TEST_TRANSPORT", 00:10:28.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.057 "adrfam": "ipv4", 00:10:28.057 "trsvcid": "$NVMF_PORT", 00:10:28.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.057 "hdgst": ${hdgst:-false}, 00:10:28.057 "ddgst": ${ddgst:-false} 00:10:28.057 }, 00:10:28.057 "method": "bdev_nvme_attach_controller" 00:10:28.057 } 00:10:28.057 EOF 00:10:28.057 )") 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1283408 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.057 "params": { 00:10:28.057 "name": "Nvme1", 00:10:28.057 "trtype": "tcp", 00:10:28.057 "traddr": "10.0.0.2", 00:10:28.057 "adrfam": "ipv4", 00:10:28.057 "trsvcid": "4420", 00:10:28.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.057 "hdgst": false, 00:10:28.057 "ddgst": false 00:10:28.057 }, 00:10:28.057 "method": "bdev_nvme_attach_controller" 00:10:28.057 }' 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.057 "params": { 00:10:28.057 "name": "Nvme1", 00:10:28.057 "trtype": "tcp", 00:10:28.057 "traddr": "10.0.0.2", 00:10:28.057 "adrfam": "ipv4", 00:10:28.057 "trsvcid": "4420", 00:10:28.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.057 "hdgst": false, 00:10:28.057 "ddgst": false 00:10:28.057 }, 00:10:28.057 "method": "bdev_nvme_attach_controller" 00:10:28.057 }' 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.057 "params": { 00:10:28.057 "name": "Nvme1", 00:10:28.057 "trtype": "tcp", 00:10:28.057 "traddr": "10.0.0.2", 00:10:28.057 "adrfam": "ipv4", 00:10:28.057 "trsvcid": "4420", 00:10:28.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.057 "hdgst": false, 00:10:28.057 "ddgst": false 00:10:28.057 }, 00:10:28.057 "method": "bdev_nvme_attach_controller" 00:10:28.057 }' 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.057 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.057 "params": { 00:10:28.057 "name": "Nvme1", 00:10:28.057 "trtype": "tcp", 00:10:28.057 "traddr": "10.0.0.2", 00:10:28.057 "adrfam": "ipv4", 00:10:28.057 "trsvcid": "4420", 00:10:28.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.057 "hdgst": false, 00:10:28.057 "ddgst": false 00:10:28.057 }, 00:10:28.057 "method": "bdev_nvme_attach_controller" 00:10:28.057 }' 00:10:28.057 [2024-07-25 16:49:48.132463] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:10:28.057 [2024-07-25 16:49:48.132514] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:28.057 [2024-07-25 16:49:48.134919] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:10:28.057 [2024-07-25 16:49:48.134970] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:28.057 [2024-07-25 16:49:48.136171] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:10:28.057 [2024-07-25 16:49:48.136224] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:28.057 [2024-07-25 16:49:48.137980] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
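Note: the four bdevperf instances above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80, instance ids 1-4) are the actual I/O generators; each receives its bdev_nvme_attach_controller config as JSON on /dev/fd/63, which is how the gen_nvmf_target_json output printed above reaches it via process substitution. A hedged sketch of one launch; gen_nvmf_target_json stands in for the generator whose printf output is shown, and the full JSON wrapper it emits is not reproduced here:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# write job: core mask 0x10, instance id 1, queue depth 128, 4 KiB I/Os, 1 second run, 256 MB memory
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# the read, flush and unmap jobs differ only in -m/-i and -w; the script later waits on each PID in turn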
00:10:28.057 [2024-07-25 16:49:48.138026] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:28.057 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.057 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.057 [2024-07-25 16:49:48.277934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.057 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.057 [2024-07-25 16:49:48.328467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:28.319 [2024-07-25 16:49:48.336090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.319 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.319 [2024-07-25 16:49:48.381953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.319 [2024-07-25 16:49:48.387952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:28.319 [2024-07-25 16:49:48.431773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:28.319 [2024-07-25 16:49:48.432535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.319 [2024-07-25 16:49:48.481756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:28.319 Running I/O for 1 seconds... 00:10:28.580 Running I/O for 1 seconds... 00:10:28.580 Running I/O for 1 seconds... 00:10:28.580 Running I/O for 1 seconds... 00:10:29.524 00:10:29.524 Latency(us) 00:10:29.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.524 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:29.524 Nvme1n1 : 1.01 11756.82 45.93 0.00 0.00 10829.19 3713.71 24794.45 00:10:29.524 =================================================================================================================== 00:10:29.524 Total : 11756.82 45.93 0.00 0.00 10829.19 3713.71 24794.45 00:10:29.524 00:10:29.524 Latency(us) 00:10:29.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.524 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:29.524 Nvme1n1 : 1.01 13095.99 51.16 0.00 0.00 9736.70 6225.92 20971.52 00:10:29.524 =================================================================================================================== 00:10:29.524 Total : 13095.99 51.16 0.00 0.00 9736.70 6225.92 20971.52 00:10:29.524 00:10:29.524 Latency(us) 00:10:29.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.524 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:29.524 Nvme1n1 : 1.00 12120.39 47.35 0.00 0.00 10541.85 2717.01 26760.53 00:10:29.524 =================================================================================================================== 00:10:29.524 Total : 12120.39 47.35 0.00 0.00 10541.85 2717.01 26760.53 00:10:29.524 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1283411 00:10:29.524 00:10:29.524 Latency(us) 00:10:29.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.524 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:29.524 Nvme1n1 : 1.00 187665.88 733.07 0.00 0.00 678.74 271.36 856.75 00:10:29.524 =================================================================================================================== 00:10:29.524 
Total : 187665.88 733.07 0.00 0.00 678.74 271.36 856.75 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1283414 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1283417 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.785 rmmod nvme_tcp 00:10:29.785 rmmod nvme_fabrics 00:10:29.785 rmmod nvme_keyring 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1283109 ']' 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1283109 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1283109 ']' 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1283109 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.785 16:49:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1283109 00:10:29.785 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.785 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.785 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1283109' 00:10:29.785 killing process with pid 1283109 00:10:29.785 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1283109 00:10:29.785 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@974 -- # wait 1283109 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.047 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:32.605 00:10:32.605 real 0m12.102s 00:10:32.605 user 0m18.943s 00:10:32.605 sys 0m6.409s 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.605 ************************************ 00:10:32.605 END TEST nvmf_bdev_io_wait 00:10:32.605 ************************************ 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.605 ************************************ 00:10:32.605 START TEST nvmf_queue_depth 00:10:32.605 ************************************ 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:32.605 * Looking for test storage... 
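Note: the tail of the bdev_io_wait test above is the trap-driven teardown: nvmftestfini kills the nvmf_tgt pid (1283109), unloads the NVMe/TCP kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), removes the target namespace and flushes the initiator address, after which the next test, nvmf_queue_depth starting here, rebuilds the same environment. A condensed, hedged recap of that cleanup; the netns deletion line is an assumption about what _remove_spdk_ns does:

sync
kill 1283109 && wait 1283109           # killprocess $nvmfpid
modprobe -v -r nvme-tcp                # also pulls out nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk        # assumed expansion of _remove_spdk_ns
ip -4 addr flush cvl_0_1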
00:10:32.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.605 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.606 16:49:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.606 16:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.196 16:49:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:39.196 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:39.196 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:39.196 16:49:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:39.196 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.196 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:39.196 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.197 
16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.197 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:10:39.497 00:10:39.497 --- 10.0.0.2 ping statistics --- 00:10:39.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.497 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:10:39.497 00:10:39.497 --- 10.0.0.1 ping statistics --- 00:10:39.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.497 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1287836 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1287836 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1287836 ']' 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.497 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:39.497 [2024-07-25 16:49:59.654557] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
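The nvmf_tcp_init trace above is what wires up the single-host test topology: the second E810 port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) while the first port is moved into a private namespace and becomes the target (cvl_0_0, 10.0.0.2 inside cvl_0_0_ns_spdk), with an iptables rule opening the NVMe/TCP port and a ping in each direction as a sanity check. Condensed from the commands shown in the trace (a sketch assuming the same interface names and root privileges, not the literal helper from test/nvmf/common.sh):

ip -4 addr flush cvl_0_0                                            # drop any stale addresses on both ports
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target-side port lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator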
00:10:39.497 [2024-07-25 16:49:59.654624] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.497 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.760 [2024-07-25 16:49:59.744949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.760 [2024-07-25 16:49:59.830039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.760 [2024-07-25 16:49:59.830091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.760 [2024-07-25 16:49:59.830099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.760 [2024-07-25 16:49:59.830106] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.760 [2024-07-25 16:49:59.830112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.760 [2024-07-25 16:49:59.830141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 [2024-07-25 16:50:00.489181] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 Malloc0 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 [2024-07-25 16:50:00.545374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1288181 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1288181 /var/tmp/bdevperf.sock 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1288181 ']' 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:40.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.334 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 [2024-07-25 16:50:00.598904] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
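At this point queue_depth.sh has provisioned the target purely over JSON-RPC and is bringing up the initiator: nvmf_tgt runs inside the namespace on core 1 (-m 0x2), the subsystem is backed by a 64 MiB malloc bdev listening on 10.0.0.2:4420, and bdevperf is launched idle (-z) with a queue depth of 1024 so a controller can be attached and I/O kicked off over its own RPC socket. A rough standalone equivalent of the rpc_cmd calls in the trace, using scripts/rpc.py from the SPDK repo root (a sketch of the sequence, not the literal script; the test also waits for each RPC socket to appear before issuing calls):

# target side, inside the namespace created above
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options exactly as the test passes them
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB backing bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf idles (-z) until perform_tests is sent to its RPC socket
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests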
00:10:40.334 [2024-07-25 16:50:00.598955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288181 ] 00:10:40.595 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.595 [2024-07-25 16:50:00.659090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.595 [2024-07-25 16:50:00.726960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.168 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.168 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:41.168 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:41.168 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.168 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:41.429 NVMe0n1 00:10:41.429 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.429 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:41.429 Running I/O for 10 seconds... 00:10:51.434 00:10:51.434 Latency(us) 00:10:51.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:51.434 Verification LBA range: start 0x0 length 0x4000 00:10:51.434 NVMe0n1 : 10.06 11558.23 45.15 0.00 0.00 88250.05 18240.85 72089.60 00:10:51.434 =================================================================================================================== 00:10:51.434 Total : 11558.23 45.15 0.00 0.00 88250.05 18240.85 72089.60 00:10:51.434 0 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1288181 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1288181 ']' 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1288181 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288181 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288181' 00:10:51.434 killing process with pid 1288181 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1288181 00:10:51.434 Received shutdown 
signal, test time was about 10.000000 seconds 00:10:51.434 00:10:51.434 Latency(us) 00:10:51.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.434 =================================================================================================================== 00:10:51.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:51.434 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1288181 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.696 rmmod nvme_tcp 00:10:51.696 rmmod nvme_fabrics 00:10:51.696 rmmod nvme_keyring 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1287836 ']' 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1287836 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1287836 ']' 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1287836 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1287836 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1287836' 00:10:51.696 killing process with pid 1287836 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1287836 00:10:51.696 16:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1287836 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.958 16:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.873 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:53.873 00:10:53.873 real 0m21.816s 00:10:53.873 user 0m25.329s 00:10:53.873 sys 0m6.513s 00:10:53.873 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.873 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.873 ************************************ 00:10:53.873 END TEST nvmf_queue_depth 00:10:53.873 ************************************ 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.136 ************************************ 00:10:54.136 START TEST nvmf_target_multipath 00:10:54.136 ************************************ 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:54.136 * Looking for test storage... 
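nvmftestfini at the end of the queue_depth run reverses the setup before the multipath suite proceeds: the kernel initiator modules are unloaded (the rmmod lines above), the nvmf_tgt started for the suite is killed, and the namespace and leftover addresses are cleaned up so both NIC ports are free for the next test. Roughly (a sketch assuming the same names; _remove_spdk_ns is xtrace-suppressed in the log, so the namespace deletion shown here is an assumption about what it does):

sync
modprobe -v -r nvme-tcp                        # unloads nvme_tcp; nvme_fabrics / nvme_keyring follow once unused
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                # the suite's nvmf_tgt (pid 1287836 in this run); the real helper is killprocess
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1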
00:10:54.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:54.136 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
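The discovery that follows repeats, for the multipath suite, the PCI scan already traced for queue_depth: supported Intel E810/X722 and Mellanox device IDs are collected, and only functions that expose an up net device under sysfs survive, which on this host yields the two ice ports 0000:4b:00.0/0000:4b:00.1 (cvl_0_0 and cvl_0_1). A simplified standalone rendition, restricted to the E810 IDs matched in this run (the real helper works from a pre-built pci_bus_cache with slightly different checks, so treat this as an approximation):

#!/usr/bin/env bash
# Find E810 functions (8086:1592 / 8086:159b) whose net device is operationally up.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue                                   # skip functions with no net device
        [[ $(<"$net/operstate") == up ]] && net_devs+=("${net##*/}")
    done
done
(( ${#net_devs[@]} )) && printf 'usable NVMe-oF test NIC: %s\n' "${net_devs[@]}"

With only these two ports, one becomes the target and one the initiator, so NVMF_SECOND_TARGET_IP stays empty and multipath.sh later skips itself with 'only one NIC for nvmf test' and exits 0, as the trace further down shows.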
00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:02.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.287 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:02.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:02.288 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.288 16:50:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:02.288 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.288 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:11:02.288 00:11:02.288 --- 10.0.0.2 ping statistics --- 00:11:02.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.288 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:11:02.288 00:11:02.288 --- 10.0.0.1 ping statistics --- 00:11:02.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.288 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:02.288 only one NIC for nvmf test 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.288 rmmod nvme_tcp 00:11:02.288 rmmod nvme_fabrics 00:11:02.288 rmmod nvme_keyring 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.288 16:50:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.686 00:11:03.686 real 0m9.407s 
00:11:03.686 user 0m2.004s 00:11:03.686 sys 0m5.328s 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 ************************************ 00:11:03.686 END TEST nvmf_target_multipath 00:11:03.686 ************************************ 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 ************************************ 00:11:03.686 START TEST nvmf_zcopy 00:11:03.686 ************************************ 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:03.686 * Looking for test storage... 00:11:03.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.686 16:50:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.686 16:50:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.686 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.687 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.687 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:03.687 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:03.687 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:03.687 16:50:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.298 16:50:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:10.298 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:10.298 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:10.298 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.298 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:10.299 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.299 16:50:30 
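Editor's note: gather_supported_nvmf_pci_devs above matches the host's NICs against a list of Intel E810 and Mellanox PCI IDs and then maps each hit to its kernel interface through sysfs; on this machine both functions of one E810 card (0000:4b:00.0/1, 8086:159b, driver ice) come back as the renamed ports cvl_0_0 and cvl_0_1. A cut-down sketch of that lookup for a single device ID; the ID and the udev-renamed interface names are just what this run happened to see.

# List E810-family ports (vendor 0x8086, device 0x159b) and their net devices.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue
        printf 'Found net device under %s: %s\n' "$pci" "${dev##*/}"
    done
done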
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.299 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:11:10.561 00:11:10.561 --- 10.0.0.2 ping statistics --- 00:11:10.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.561 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:11:10.561 00:11:10.561 --- 10.0.0.1 ping statistics --- 00:11:10.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.561 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1298541 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1298541 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1298541 ']' 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.561 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:10.823 [2024-07-25 16:50:30.872716] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
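Editor's note: the nvmf_tcp_init step above splits the two E810 ports across network namespaces so a single host can act as both target and initiator: cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, port 4420 is opened in iptables, and connectivity is verified with one ping in each direction. The same command sequence, pulled out of the trace for readability (interface and namespace names are the ones used in this run):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP from the namespace
ping -c 1 10.0.0.2                                            # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> initiator address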
00:11:10.823 [2024-07-25 16:50:30.872786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.823 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.823 [2024-07-25 16:50:30.963023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.823 [2024-07-25 16:50:31.054892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.823 [2024-07-25 16:50:31.054944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.823 [2024-07-25 16:50:31.054952] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.823 [2024-07-25 16:50:31.054959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.823 [2024-07-25 16:50:31.054965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.823 [2024-07-25 16:50:31.054988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.395 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.395 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:11.395 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.395 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.395 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.657 [2024-07-25 16:50:31.704852] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.657 [2024-07-25 16:50:31.729054] 
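Editor's note: nvmfappstart above launches nvmf_tgt inside the target namespace (pid 1298541 in this run) and blocks until the application answers on its RPC socket. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock and using rpc_get_methods as the liveness probe; the harness's waitforlisten adds pid checks, retries, and timeouts that this sketch omits.

# Start the target in the namespace, then poll the RPC socket until it responds.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done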
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.657 malloc0 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:11.657 { 00:11:11.657 "params": { 00:11:11.657 "name": "Nvme$subsystem", 00:11:11.657 "trtype": "$TEST_TRANSPORT", 00:11:11.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.657 "adrfam": "ipv4", 00:11:11.657 "trsvcid": "$NVMF_PORT", 00:11:11.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.657 "hdgst": ${hdgst:-false}, 00:11:11.657 "ddgst": ${ddgst:-false} 00:11:11.657 }, 00:11:11.657 "method": "bdev_nvme_attach_controller" 00:11:11.657 } 00:11:11.657 EOF 00:11:11.657 )") 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
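Editor's note: with the target up, the test provisions it over JSON-RPC: a TCP transport with zero-copy enabled and no in-capsule data, a subsystem allowing up to 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. The harness's rpc_cmd helper is, roughly, scripts/rpc.py pointed at the target's socket, so the same sequence can be replayed as below (socket path assumed to be the default /var/tmp/spdk.sock):

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy             # same flags as the run above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB bdev, 4 KiB block size
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1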
00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:11.657 16:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:11.657 "params": { 00:11:11.657 "name": "Nvme1", 00:11:11.657 "trtype": "tcp", 00:11:11.657 "traddr": "10.0.0.2", 00:11:11.657 "adrfam": "ipv4", 00:11:11.657 "trsvcid": "4420", 00:11:11.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.657 "hdgst": false, 00:11:11.657 "ddgst": false 00:11:11.657 }, 00:11:11.657 "method": "bdev_nvme_attach_controller" 00:11:11.657 }' 00:11:11.657 [2024-07-25 16:50:31.837532] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:11:11.658 [2024-07-25 16:50:31.837604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298871 ] 00:11:11.658 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.658 [2024-07-25 16:50:31.903705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.919 [2024-07-25 16:50:31.977970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.919 Running I/O for 10 seconds... 00:11:21.999 00:11:21.999 Latency(us) 00:11:21.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.999 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:21.999 Verification LBA range: start 0x0 length 0x1000 00:11:21.999 Nvme1n1 : 10.01 9590.94 74.93 0.00 0.00 13294.63 2170.88 40632.32 00:11:21.999 =================================================================================================================== 00:11:21.999 Total : 9590.94 74.93 0.00 0.00 13294.63 2170.88 40632.32 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1300890 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.261 { 00:11:22.261 "params": { 00:11:22.261 "name": "Nvme$subsystem", 00:11:22.261 "trtype": "$TEST_TRANSPORT", 00:11:22.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.261 "adrfam": "ipv4", 00:11:22.261 "trsvcid": "$NVMF_PORT", 00:11:22.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.261 "hdgst": ${hdgst:-false}, 00:11:22.261 "ddgst": ${ddgst:-false} 00:11:22.261 }, 00:11:22.261 "method": "bdev_nvme_attach_controller" 00:11:22.261 } 00:11:22.261 EOF 00:11:22.261 )") 00:11:22.261 16:50:42 
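Editor's note: the first bdevperf pass above attaches to the target as an NVMe-oF/TCP initiator using a bdev config generated onto a file descriptor, then runs a 10 s verify workload at queue depth 128 with 8 KiB I/O (about 9.6 k IOPS / 75 MiB/s here). A sketch of the same invocation with the config written to a temporary file instead of /dev/fd/62; the params block is the one printed in the trace, while the outer subsystems/bdev wrapper is not visible in this log and is assumed.

cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 10 s verify workload, queue depth 128, 8 KiB I/O size (flags as in the run above).
./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192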
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:22.261 [2024-07-25 16:50:42.301368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.261 [2024-07-25 16:50:42.301398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:22.261 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.261 "params": { 00:11:22.261 "name": "Nvme1", 00:11:22.261 "trtype": "tcp", 00:11:22.261 "traddr": "10.0.0.2", 00:11:22.261 "adrfam": "ipv4", 00:11:22.261 "trsvcid": "4420", 00:11:22.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.261 "hdgst": false, 00:11:22.261 "ddgst": false 00:11:22.261 }, 00:11:22.261 "method": "bdev_nvme_attach_controller" 00:11:22.261 }' 00:11:22.261 [2024-07-25 16:50:42.313372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.313382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.325401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.325409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.337433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.337441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.342057] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:11:22.262 [2024-07-25 16:50:42.342104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300890 ] 00:11:22.262 [2024-07-25 16:50:42.349463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.349471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.361494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.361502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.262 [2024-07-25 16:50:42.373524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.373532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.385557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.385564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.397586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.397594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.400019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.262 [2024-07-25 16:50:42.409616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.409625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.421647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.421655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.433677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.433688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.445707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.445717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.457738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.457746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.463644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.262 [2024-07-25 16:50:42.469768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.469777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.481806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.481820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.493834] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.493843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.505863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.505871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.517894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.517901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.262 [2024-07-25 16:50:42.529925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.262 [2024-07-25 16:50:42.529932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.541967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.541982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.553991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.554001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.566021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.566030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.578050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.578058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.590083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.590090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.602114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.602121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.614147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.614157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.626180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.626191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.638223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.638235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.650250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.650263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 Running I/O for 5 seconds... 
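Editor's note: the second bdevperf pass (perfpid 1300890, 5 s randrw 50/50 at queue depth 128 with 8 KiB I/O) runs while the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1; because the namespace is still attached, the target rejects every attempt with the paired "Requested NSID 1 already in use" / "Unable to add namespace" messages that fill the rest of this trace, and the I/O is not disturbed. A loop of roughly this shape would generate the same pattern; it is a sketch under those assumptions, not the zcopy.sh source.

# Hammer the subsystem with add_ns requests for an NSID that is already attached;
# each call fails fast at the RPC layer while bdevperf keeps running.
while kill -0 "$perfpid" 2>/dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done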
00:11:22.524 [2024-07-25 16:50:42.676325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.676342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.690396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.690412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.703708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.703725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.716829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.716845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.729748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.729764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.742768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.742784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.755872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.755888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.768783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.768799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.782119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.782135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.524 [2024-07-25 16:50:42.795069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.524 [2024-07-25 16:50:42.795085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.808281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.808297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.821349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.821365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.834228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.834244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.847499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.847515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.860564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 
[2024-07-25 16:50:42.860580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.873773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.873789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.887004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.887019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.899839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.899855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.912763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.912779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.925619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.925634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.938691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.938706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.785 [2024-07-25 16:50:42.951181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.785 [2024-07-25 16:50:42.951197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:42.964138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:42.964153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:42.976327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:42.976343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:42.989553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:42.989569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:43.002076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:43.002091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:43.014362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:43.014377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:43.027238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:43.027252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:43.040284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:43.040299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.786 [2024-07-25 16:50:43.053187] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.786 [2024-07-25 16:50:43.053208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.066591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.066607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.079946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.079961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.092416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.092432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.105539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.105555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.118282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.118297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.131511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.131527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.144689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.144704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.157676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.157692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.170834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.170850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.183609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.183625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.196518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.196533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.209317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.209332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.221846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.221868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.235083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.235098] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.047 [2024-07-25 16:50:43.248395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.047 [2024-07-25 16:50:43.248411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.048 [2024-07-25 16:50:43.261581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.048 [2024-07-25 16:50:43.261597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.048 [2024-07-25 16:50:43.274495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.048 [2024-07-25 16:50:43.274511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.048 [2024-07-25 16:50:43.287035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.048 [2024-07-25 16:50:43.287051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.048 [2024-07-25 16:50:43.299969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.048 [2024-07-25 16:50:43.299985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.048 [2024-07-25 16:50:43.312627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.048 [2024-07-25 16:50:43.312642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.325684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.325700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.338736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.338752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.351571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.351587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.364397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.364412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.377642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.377657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.390492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.390507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.403722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.403737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.416459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.416474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.310 [2024-07-25 16:50:43.429584] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.310 [2024-07-25 16:50:43.429599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors repeats for every subsequent attempt, roughly every 9-13 ms from 16:50:43.441 through 16:50:46.636: subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace ...]
00:11:26.468 [2024-07-25 16:50:46.644533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.644548]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.653605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.653624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.662500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.662515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.671373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.671388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.680104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.680119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.688931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.688946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.697928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.697942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.706704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.706718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.715268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.715283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.723927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.723942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.468 [2024-07-25 16:50:46.732713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.468 [2024-07-25 16:50:46.732728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.729 [2024-07-25 16:50:46.741471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.729 [2024-07-25 16:50:46.741485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.729 [2024-07-25 16:50:46.750423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.729 [2024-07-25 16:50:46.750437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.729 [2024-07-25 16:50:46.759278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.729 [2024-07-25 16:50:46.759292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.729 [2024-07-25 16:50:46.768199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.729 [2024-07-25 16:50:46.768219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.729 [2024-07-25 16:50:46.777335] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.729 [2024-07-25 16:50:46.777349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.729 [2024-07-25 16:50:46.786214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.729 [2024-07-25 16:50:46.786229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.729 [2024-07-25 16:50:46.795051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.729 [2024-07-25 16:50:46.795065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.803943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.803957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.812933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.812947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.821952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.821970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.830179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.830193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.838542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.838556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.847542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.847556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.856320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.856334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.865325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.865339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.873753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.873767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.882820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.882834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.891598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.891612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.900136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.900150] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.908612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.908627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.917968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.917984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.926497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.926512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.934955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.934969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.943949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.943964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.952764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.952779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.961342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.961357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.969968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.969983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.978763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.978778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.987682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.987700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.730 [2024-07-25 16:50:46.996306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.730 [2024-07-25 16:50:46.996320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.004724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.004739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.013497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.013511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.022452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.022467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.031351] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.031365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.040262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.040276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.049388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.049402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.057715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.057729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.066662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.066676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.074912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.074926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.083709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.083724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.092361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.092376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.101172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.101186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.109481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.109495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.118712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.118727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.127609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.127623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.135808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.135822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.144433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.144448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.153045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.153060] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.166271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.166286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.175174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.175188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.184121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.184136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.193022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.193036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.201791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.201805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.210289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.210303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.219066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.219080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.228100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.228114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.236480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.236495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.245208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.245223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.254055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.254070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:26.992 [2024-07-25 16:50:47.262729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:26.992 [2024-07-25 16:50:47.262745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.271614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.271629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.280661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.280675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.288304] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.288319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.297230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.297244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.306167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.306182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.315080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.315095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.323294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.323309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.332399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.332414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.341301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.341316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.350307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.350321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.359216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.359231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.368369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.368384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.376729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.376743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.385746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.385760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.394222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.394237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.403106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.403120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.412025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.412040] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.420784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.420798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.430093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.430108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.438471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.438485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.447409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.447424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.456244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.456258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.464787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.464801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.473447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.473461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.482175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.482190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.491151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.491165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.499932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.499946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.508818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.508832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.517737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.517751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.254 [2024-07-25 16:50:47.526664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.254 [2024-07-25 16:50:47.526678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.535064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.535079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.543959] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.543973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.552752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.552766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.561773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.561787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.570311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.570326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.579227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.579241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.588021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.588036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.596900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.596914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.605672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.605686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.614682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.614697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.623078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.623092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.632233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.632247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.641208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.641223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.649917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.649932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.658703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.658718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 [2024-07-25 16:50:47.666913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.666927] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.516 00:11:27.516 Latency(us) 00:11:27.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.516 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:27.516 Nvme1n1 : 5.01 19810.95 154.77 0.00 0.00 6454.05 2362.03 31675.73 00:11:27.516 =================================================================================================================== 00:11:27.516 Total : 19810.95 154.77 0.00 0.00 6454.05 2362.03 31675.73 00:11:27.516 [2024-07-25 16:50:47.673104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.516 [2024-07-25 16:50:47.673117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.681123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.681136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.689142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.689150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.697167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.697178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.705188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.705197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.713207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.713215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.721227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.721235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.729246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.729254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.737263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.737271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.745284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.745293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.753303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.753310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.761326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.761336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.769346] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.769354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.777366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.777380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.517 [2024-07-25 16:50:47.785387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.517 [2024-07-25 16:50:47.785398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.777 [2024-07-25 16:50:47.793405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.777 [2024-07-25 16:50:47.793413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.777 [2024-07-25 16:50:47.801426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:27.777 [2024-07-25 16:50:47.801434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1300890) - No such process 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1300890 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.777 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.777 delay0 00:11:27.778 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.778 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:27.778 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.778 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.778 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.778 16:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:27.778 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.778 [2024-07-25 16:50:47.937263] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:34.364 Initializing NVMe Controllers 00:11:34.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:34.364 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:34.364 Initialization complete. Launching workers. 00:11:34.364 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 248, failed: 14338 00:11:34.364 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14466, failed to submit 120 00:11:34.364 success 14408, unsuccess 58, failed 0 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.364 rmmod nvme_tcp 00:11:34.364 rmmod nvme_fabrics 00:11:34.364 rmmod nvme_keyring 00:11:34.364 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1298541 ']' 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1298541 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1298541 ']' 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1298541 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1298541 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1298541' 00:11:34.624 killing process with pid 1298541 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1298541 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1298541 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.624 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.170 00:11:37.170 real 0m33.204s 00:11:37.170 user 0m43.281s 00:11:37.170 sys 0m12.013s 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.170 ************************************ 00:11:37.170 END TEST nvmf_zcopy 00:11:37.170 ************************************ 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:37.170 ************************************ 00:11:37.170 START TEST nvmf_nmic 00:11:37.170 ************************************ 00:11:37.170 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:37.170 * Looking for test storage... 
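[Editor's note] The long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages above is the RPC layer repeatedly receiving nvmf_subsystem_add_ns requests for NSID 1 while that namespace already exists, apparently as part of the zcopy test's add/remove loop; each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext and reported through nvmf_rpc_ns_paused. A minimal sketch of the teardown-and-retest sequence that follows is given below. Every NQN, bdev name, address and flag is copied from the log; rpc_cmd is assumed to be the autotest helper that forwards its arguments to SPDK's RPC interface (scripts/rpc.py), so outside the test framework you would call scripts/rpc.py against the target's RPC socket instead, and the path to the abort example is written relative to the SPDK repo root rather than the Jenkins workspace.

    # Drop the namespace the loop above kept colliding with (NSID 1 on cnode1).
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

    # Stack a delay bdev on top of malloc0; the -r/-t/-w/-n values are the average and
    # tail read/write latencies to inject (microseconds per SPDK's delay-bdev docs --
    # an assumption here, not something the log itself states).
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Expose the delay bdev as NSID 1 of the same subsystem.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive abort traffic at it over NVMe/TCP for 5 s: core mask 0x1, queue depth 64,
    # 50/50 randrw, connecting to the target address seen in the log.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort run's summary above ("I/O completed: 248, failed: 14338 ... abort submitted 14466") has a large "failed" I/O count, which is consistent with the example intentionally aborting in-flight commands rather than with a malfunction; the subsequent nvmftestfini step then unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the target process, as the rmmod and killprocess lines show.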
00:11:37.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.170 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.171 16:50:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.171 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.831 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:43.832 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:43.832 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.832 16:51:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:43.832 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:43.832 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.832 16:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.832 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.832 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.832 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.832 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:11:44.094 00:11:44.094 --- 10.0.0.2 ping statistics --- 00:11:44.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.094 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:11:44.094 00:11:44.094 --- 10.0.0.1 ping statistics --- 00:11:44.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.094 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1307684 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1307684 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1307684 ']' 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.094 16:51:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.094 [2024-07-25 16:51:04.281060] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:11:44.094 [2024-07-25 16:51:04.281113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.094 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.094 [2024-07-25 16:51:04.348399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.355 [2024-07-25 16:51:04.417460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.355 [2024-07-25 16:51:04.417500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.355 [2024-07-25 16:51:04.417507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.355 [2024-07-25 16:51:04.417514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.355 [2024-07-25 16:51:04.417519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
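The target application above is launched inside the cvl_0_0_ns_spdk namespace by nvmfappstart/waitforlisten: -m 0xF pins four reactors (matching the "Total cores available: 4" and the reactor notices that follow), -e 0xFFFF enables all tracepoint groups, and -i 0 sets the shared-memory id. A condensed sketch of that launch, with the wait loop as a simplified stand-in for waitforlisten (run as root, as the autotest does):

# Start the SPDK NVMe-oF target inside the target network namespace
# (binary and namespace names taken from the trace above).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified wait: poll until the app's JSON-RPC UNIX socket appears.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done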
00:11:44.355 [2024-07-25 16:51:04.417663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.355 [2024-07-25 16:51:04.417778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.355 [2024-07-25 16:51:04.417935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.355 [2024-07-25 16:51:04.417936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 [2024-07-25 16:51:05.103176] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 Malloc0 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.927 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 [2024-07-25 16:51:05.162704] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:44.928 test case1: single bdev can't be used in multiple subsystems 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.928 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.928 [2024-07-25 16:51:05.198603] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:44.928 [2024-07-25 16:51:05.198622] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:44.928 [2024-07-25 16:51:05.198630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.189 request: 00:11:45.189 { 00:11:45.189 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:45.189 "namespace": { 00:11:45.189 "bdev_name": "Malloc0", 00:11:45.189 "no_auto_visible": false 00:11:45.189 }, 00:11:45.189 "method": "nvmf_subsystem_add_ns", 00:11:45.189 "req_id": 1 00:11:45.189 } 00:11:45.189 Got JSON-RPC error response 00:11:45.189 response: 00:11:45.189 { 00:11:45.189 "code": -32602, 00:11:45.189 "message": "Invalid parameters" 00:11:45.189 } 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:45.189 Adding namespace failed - expected result. 
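The -32602 error above is the expected outcome of test case1: Malloc0 is already claimed by cnode1, so adding it to a second subsystem must be rejected. A condensed sketch of that check using rpc.py directly (rpc_cmd in the trace is effectively a wrapper around it; the failure handling here is illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Subsystem 1 owns the bdev.
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# A second subsystem must not be able to claim the same bdev; the call is
# expected to fail with JSON-RPC error -32602 (Invalid parameters).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: Malloc0 was added to two subsystems" >&2
    exit 1
fi
echo ' Adding namespace failed - expected result.'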
00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:45.189 test case2: host connect to nvmf target in multiple paths 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:45.189 [2024-07-25 16:51:05.210718] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.189 16:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.572 16:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:48.488 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.488 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.488 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.488 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.488 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.402 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.402 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.402 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.402 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.402 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.402 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:50.402 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:50.402 [global] 00:11:50.402 thread=1 00:11:50.402 invalidate=1 00:11:50.402 rw=write 00:11:50.402 time_based=1 00:11:50.402 runtime=1 00:11:50.402 ioengine=libaio 00:11:50.402 direct=1 00:11:50.402 bs=4096 00:11:50.402 iodepth=1 00:11:50.402 norandommap=0 00:11:50.402 numjobs=1 00:11:50.402 00:11:50.402 verify_dump=1 00:11:50.402 verify_backlog=512 00:11:50.402 verify_state_save=0 00:11:50.402 do_verify=1 00:11:50.402 verify=crc32c-intel 00:11:50.402 [job0] 00:11:50.402 filename=/dev/nvme0n1 00:11:50.402 Could not set queue depth (nvme0n1) 00:11:50.663 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:50.663 fio-3.35 00:11:50.663 Starting 1 thread 00:11:52.048 00:11:52.048 job0: (groupid=0, jobs=1): err= 0: pid=1309504: Thu Jul 25 16:51:11 2024 00:11:52.048 read: IOPS=11, BW=46.2KiB/s (47.3kB/s)(48.0KiB/1039msec) 00:11:52.048 slat (nsec): min=25334, max=26818, avg=26362.25, stdev=436.18 00:11:52.048 clat (usec): min=41906, max=42965, avg=42184.48, stdev=402.40 00:11:52.048 lat (usec): min=41932, max=42992, avg=42210.85, stdev=402.36 00:11:52.048 clat percentiles (usec): 00:11:52.048 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:52.048 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:52.048 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:11:52.048 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:52.048 | 99.99th=[42730] 00:11:52.048 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:11:52.048 slat (usec): min=9, max=28213, avg=88.03, stdev=1245.41 00:11:52.048 clat (usec): min=687, max=1198, avg=943.65, stdev=57.83 00:11:52.048 lat (usec): min=700, max=29239, avg=1031.68, stdev=1250.44 00:11:52.048 clat percentiles (usec): 00:11:52.048 | 1.00th=[ 758], 5.00th=[ 824], 10.00th=[ 857], 20.00th=[ 906], 00:11:52.048 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:11:52.048 | 70.00th=[ 971], 80.00th=[ 979], 90.00th=[ 996], 95.00th=[ 1004], 00:11:52.048 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1205], 99.95th=[ 1205], 00:11:52.048 | 99.99th=[ 1205] 00:11:52.048 bw ( KiB/s): min= 264, max= 3832, per=100.00%, avg=2048.00, stdev=2522.96, samples=2 00:11:52.048 iops : min= 66, max= 958, avg=512.00, stdev=630.74, samples=2 00:11:52.048 lat (usec) : 750=0.76%, 1000=90.27% 00:11:52.048 lat (msec) : 2=6.68%, 50=2.29% 00:11:52.048 cpu : usr=0.29%, sys=2.22%, ctx=528, majf=0, minf=1 00:11:52.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.048 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.048 00:11:52.048 Run status group 0 (all jobs): 00:11:52.048 READ: bw=46.2KiB/s (47.3kB/s), 46.2KiB/s-46.2KiB/s (47.3kB/s-47.3kB/s), io=48.0KiB (49.2kB), run=1039-1039msec 00:11:52.048 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec 00:11:52.048 00:11:52.048 Disk stats (read/write): 00:11:52.048 nvme0n1: ios=33/512, merge=0/0, ticks=1311/469, in_queue=1780, util=98.90% 00:11:52.048 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
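On the host side the nmic test connects to the same subsystem over both listeners, runs a short write-verify fio job through the fio-wrapper, and then drops the paths again. A condensed sketch built from the commands in the trace, with the device wait simplified relative to waitforserial:

hostargs=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
          --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be)

# Two paths to the same subsystem: ports 4420 and 4421 on 10.0.0.2.
nvme connect "${hostargs[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${hostargs[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Wait for the namespace to appear with the expected serial, then run a
# 1-second, queue-depth-1, 4k write job with CRC32C data verification.
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper \
    -p nvmf -i 4096 -d 1 -t write -r 1 -v

# Tear both paths down.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1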
00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:52.048 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:52.049 rmmod nvme_tcp 00:11:52.049 rmmod nvme_fabrics 00:11:52.049 rmmod nvme_keyring 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1307684 ']' 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1307684 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1307684 ']' 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1307684 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1307684 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1307684' 00:11:52.049 killing process with pid 1307684 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1307684 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1307684 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.049 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.310 16:51:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.310 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:54.226 00:11:54.226 real 0m17.420s 00:11:54.226 user 0m50.009s 00:11:54.226 sys 0m6.044s 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.226 ************************************ 00:11:54.226 END TEST nvmf_nmic 00:11:54.226 ************************************ 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:54.226 ************************************ 00:11:54.226 START TEST nvmf_fio_target 00:11:54.226 ************************************ 00:11:54.226 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:54.487 * Looking for test storage... 00:11:54.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.487 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.487 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:54.487 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.487 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.488 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:01.079 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:01.080 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:01.080 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:01.080 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:01.080 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.080 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:01.342 16:51:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:01.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:12:01.342 00:12:01.342 --- 10.0.0.2 ping statistics --- 00:12:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.342 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:12:01.342 00:12:01.342 --- 10.0.0.1 ping statistics --- 00:12:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.342 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1314025 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1314025 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1314025 ']' 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.342 16:51:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.342 16:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.342 [2024-07-25 16:51:21.606457] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:12:01.343 [2024-07-25 16:51:21.606520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.603 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.603 [2024-07-25 16:51:21.677682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.603 [2024-07-25 16:51:21.752653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.603 [2024-07-25 16:51:21.752692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.603 [2024-07-25 16:51:21.752699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.603 [2024-07-25 16:51:21.752706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.603 [2024-07-25 16:51:21.752711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
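nvmf_fio_target repeats the same nvmftestinit plumbing as the nmic run above: one port of the NIC pair is moved into a private namespace as the target side, addresses are assigned, the NVMe/TCP port is opened, and reachability is verified in both directions. A condensed sketch, assuming the cvl_0_0/cvl_0_1 interface names detected in this run (root privileges required):

# Target side lives in its own namespace; the initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0 inside the namespace).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Accept NVMe/TCP traffic on the default port, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1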
00:12:01.603 [2024-07-25 16:51:21.752850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.603 [2024-07-25 16:51:21.752971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.603 [2024-07-25 16:51:21.753131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.603 [2024-07-25 16:51:21.753132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.174 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.174 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:02.174 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:02.174 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.174 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.174 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:02.435 [2024-07-25 16:51:22.567533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.435 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.696 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:02.696 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.696 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:02.696 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.957 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:02.957 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.262 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:03.262 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:03.262 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.527 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:03.527 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.788 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:03.788 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.788 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:03.788 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:04.049 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.049 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:04.049 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.310 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:04.310 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.571 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.571 [2024-07-25 16:51:24.784669] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.571 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:04.833 16:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:05.094 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:06.482 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:06.482 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.482 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.482 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:06.482 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:06.482 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:08.398 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:08.398 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:08.398 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.398 16:51:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:08.398 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.398 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:08.398 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:08.659 [global] 00:12:08.659 thread=1 00:12:08.659 invalidate=1 00:12:08.659 rw=write 00:12:08.659 time_based=1 00:12:08.659 runtime=1 00:12:08.659 ioengine=libaio 00:12:08.659 direct=1 00:12:08.659 bs=4096 00:12:08.659 iodepth=1 00:12:08.659 norandommap=0 00:12:08.659 numjobs=1 00:12:08.659 00:12:08.659 verify_dump=1 00:12:08.659 verify_backlog=512 00:12:08.659 verify_state_save=0 00:12:08.659 do_verify=1 00:12:08.659 verify=crc32c-intel 00:12:08.659 [job0] 00:12:08.659 filename=/dev/nvme0n1 00:12:08.659 [job1] 00:12:08.659 filename=/dev/nvme0n2 00:12:08.659 [job2] 00:12:08.659 filename=/dev/nvme0n3 00:12:08.659 [job3] 00:12:08.659 filename=/dev/nvme0n4 00:12:08.659 Could not set queue depth (nvme0n1) 00:12:08.659 Could not set queue depth (nvme0n2) 00:12:08.659 Could not set queue depth (nvme0n3) 00:12:08.659 Could not set queue depth (nvme0n4) 00:12:08.920 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.920 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.920 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.920 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.920 fio-3.35 00:12:08.920 Starting 4 threads 00:12:10.307 00:12:10.307 job0: (groupid=0, jobs=1): err= 0: pid=1315624: Thu Jul 25 16:51:30 2024 00:12:10.307 read: IOPS=11, BW=47.0KiB/s (48.1kB/s)(48.0KiB/1021msec) 00:12:10.307 slat (nsec): min=24342, max=24711, avg=24555.50, stdev=132.05 00:12:10.307 clat (usec): min=41851, max=42148, avg=41981.63, stdev=84.87 00:12:10.307 lat (usec): min=41876, max=42173, avg=42006.18, stdev=84.94 00:12:10.307 clat percentiles (usec): 00:12:10.307 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:10.307 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:10.307 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:10.307 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:10.307 | 99.99th=[42206] 00:12:10.307 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:12:10.308 slat (nsec): min=31727, max=51584, avg=33010.11, stdev=1682.45 00:12:10.308 clat (usec): min=647, max=1632, avg=965.86, stdev=88.68 00:12:10.308 lat (usec): min=680, max=1664, avg=998.87, stdev=88.51 00:12:10.308 clat percentiles (usec): 00:12:10.308 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 898], 00:12:10.308 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:12:10.308 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:12:10.308 | 99.00th=[ 1188], 99.50th=[ 1434], 99.90th=[ 1631], 99.95th=[ 1631], 00:12:10.308 | 99.99th=[ 1631] 00:12:10.308 bw ( KiB/s): min= 128, max= 3968, per=23.39%, avg=2048.00, stdev=2715.29, samples=2 00:12:10.308 iops : min= 32, max= 992, 
avg=512.00, stdev=678.82, samples=2 00:12:10.308 lat (usec) : 750=0.38%, 1000=64.31% 00:12:10.308 lat (msec) : 2=33.02%, 50=2.29% 00:12:10.308 cpu : usr=0.98%, sys=1.47%, ctx=526, majf=0, minf=1 00:12:10.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.308 job1: (groupid=0, jobs=1): err= 0: pid=1315625: Thu Jul 25 16:51:30 2024 00:12:10.308 read: IOPS=465, BW=1862KiB/s (1907kB/s)(1864KiB/1001msec) 00:12:10.308 slat (nsec): min=26689, max=47743, avg=27995.74, stdev=3238.68 00:12:10.308 clat (usec): min=936, max=2168, avg=1242.57, stdev=111.43 00:12:10.308 lat (usec): min=964, max=2196, avg=1270.56, stdev=111.17 00:12:10.308 clat percentiles (usec): 00:12:10.308 | 1.00th=[ 979], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1156], 00:12:10.308 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1287], 00:12:10.308 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1352], 95.00th=[ 1385], 00:12:10.308 | 99.00th=[ 1467], 99.50th=[ 1483], 99.90th=[ 2180], 99.95th=[ 2180], 00:12:10.308 | 99.99th=[ 2180] 00:12:10.308 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:10.308 slat (nsec): min=9843, max=65233, avg=32005.47, stdev=10221.78 00:12:10.308 clat (usec): min=512, max=2536, avg=747.96, stdev=119.52 00:12:10.308 lat (usec): min=529, max=2576, avg=779.97, stdev=121.53 00:12:10.308 clat percentiles (usec): 00:12:10.308 | 1.00th=[ 553], 5.00th=[ 619], 10.00th=[ 635], 20.00th=[ 685], 00:12:10.308 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 742], 60.00th=[ 758], 00:12:10.308 | 70.00th=[ 775], 80.00th=[ 791], 90.00th=[ 840], 95.00th=[ 881], 00:12:10.308 | 99.00th=[ 963], 99.50th=[ 996], 99.90th=[ 2540], 99.95th=[ 2540], 00:12:10.308 | 99.99th=[ 2540] 00:12:10.308 bw ( KiB/s): min= 4096, max= 4096, per=46.78%, avg=4096.00, stdev= 0.00, samples=1 00:12:10.308 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:10.308 lat (usec) : 750=27.91%, 1000=24.85% 00:12:10.308 lat (msec) : 2=47.03%, 4=0.20% 00:12:10.308 cpu : usr=1.40%, sys=4.60%, ctx=979, majf=0, minf=1 00:12:10.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 issued rwts: total=466,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.308 job2: (groupid=0, jobs=1): err= 0: pid=1315631: Thu Jul 25 16:51:30 2024 00:12:10.308 read: IOPS=362, BW=1451KiB/s (1485kB/s)(1452KiB/1001msec) 00:12:10.308 slat (nsec): min=6814, max=44214, avg=26064.57, stdev=3484.60 00:12:10.308 clat (usec): min=465, max=1756, avg=1264.12, stdev=171.36 00:12:10.308 lat (usec): min=491, max=1800, avg=1290.18, stdev=172.27 00:12:10.308 clat percentiles (usec): 00:12:10.308 | 1.00th=[ 545], 5.00th=[ 922], 10.00th=[ 1057], 20.00th=[ 1205], 00:12:10.308 | 30.00th=[ 1270], 40.00th=[ 1287], 50.00th=[ 1303], 60.00th=[ 1319], 00:12:10.308 | 70.00th=[ 1336], 80.00th=[ 1369], 90.00th=[ 1401], 95.00th=[ 1434], 00:12:10.308 | 99.00th=[ 1516], 99.50th=[ 1680], 99.90th=[ 1762], 99.95th=[ 1762], 00:12:10.308 | 
99.99th=[ 1762] 00:12:10.308 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:10.308 slat (nsec): min=11781, max=82816, avg=34729.48, stdev=3821.82 00:12:10.308 clat (usec): min=732, max=1350, avg=986.81, stdev=94.92 00:12:10.308 lat (usec): min=765, max=1433, avg=1021.54, stdev=95.24 00:12:10.308 clat percentiles (usec): 00:12:10.308 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 914], 00:12:10.308 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:12:10.308 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1106], 95.00th=[ 1172], 00:12:10.308 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1352], 00:12:10.308 | 99.99th=[ 1352] 00:12:10.308 bw ( KiB/s): min= 3872, max= 3872, per=44.22%, avg=3872.00, stdev= 0.00, samples=1 00:12:10.308 iops : min= 968, max= 968, avg=968.00, stdev= 0.00, samples=1 00:12:10.308 lat (usec) : 500=0.11%, 750=1.14%, 1000=36.57% 00:12:10.308 lat (msec) : 2=62.17% 00:12:10.308 cpu : usr=0.90%, sys=3.30%, ctx=876, majf=0, minf=1 00:12:10.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 issued rwts: total=363,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.308 job3: (groupid=0, jobs=1): err= 0: pid=1315633: Thu Jul 25 16:51:30 2024 00:12:10.308 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:10.308 slat (nsec): min=6644, max=44710, avg=24887.26, stdev=4577.17 00:12:10.308 clat (usec): min=598, max=1690, avg=1003.84, stdev=209.30 00:12:10.308 lat (usec): min=614, max=1716, avg=1028.73, stdev=209.92 00:12:10.308 clat percentiles (usec): 00:12:10.308 | 1.00th=[ 627], 5.00th=[ 709], 10.00th=[ 766], 20.00th=[ 840], 00:12:10.308 | 30.00th=[ 889], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 988], 00:12:10.308 | 70.00th=[ 1045], 80.00th=[ 1237], 90.00th=[ 1336], 95.00th=[ 1385], 00:12:10.308 | 99.00th=[ 1467], 99.50th=[ 1532], 99.90th=[ 1696], 99.95th=[ 1696], 00:12:10.308 | 99.99th=[ 1696] 00:12:10.308 write: IOPS=698, BW=2793KiB/s (2860kB/s)(2796KiB/1001msec); 0 zone resets 00:12:10.308 slat (nsec): min=9532, max=71788, avg=29577.59, stdev=9577.84 00:12:10.308 clat (usec): min=153, max=1339, avg=633.66, stdev=204.28 00:12:10.308 lat (usec): min=164, max=1374, avg=663.24, stdev=206.05 00:12:10.308 clat percentiles (usec): 00:12:10.308 | 1.00th=[ 281], 5.00th=[ 343], 10.00th=[ 408], 20.00th=[ 461], 00:12:10.308 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 644], 00:12:10.308 | 70.00th=[ 709], 80.00th=[ 824], 90.00th=[ 947], 95.00th=[ 1004], 00:12:10.308 | 99.00th=[ 1156], 99.50th=[ 1254], 99.90th=[ 1336], 99.95th=[ 1336], 00:12:10.308 | 99.99th=[ 1336] 00:12:10.308 bw ( KiB/s): min= 4096, max= 4096, per=46.78%, avg=4096.00, stdev= 0.00, samples=1 00:12:10.308 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:10.308 lat (usec) : 250=0.08%, 500=14.95%, 750=31.54%, 1000=35.01% 00:12:10.308 lat (msec) : 2=18.41% 00:12:10.308 cpu : usr=2.60%, sys=3.10%, ctx=1213, majf=0, minf=1 00:12:10.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.308 issued rwts: total=512,699,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:12:10.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.308 00:12:10.308 Run status group 0 (all jobs): 00:12:10.308 READ: bw=5301KiB/s (5428kB/s), 47.0KiB/s-2046KiB/s (48.1kB/s-2095kB/s), io=5412KiB (5542kB), run=1001-1021msec 00:12:10.308 WRITE: bw=8756KiB/s (8966kB/s), 2006KiB/s-2793KiB/s (2054kB/s-2860kB/s), io=8940KiB (9155kB), run=1001-1021msec 00:12:10.308 00:12:10.308 Disk stats (read/write): 00:12:10.308 nvme0n1: ios=31/512, merge=0/0, ticks=1212/509, in_queue=1721, util=96.19% 00:12:10.308 nvme0n2: ios=353/512, merge=0/0, ticks=882/330, in_queue=1212, util=96.69% 00:12:10.308 nvme0n3: ios=228/512, merge=0/0, ticks=1174/471, in_queue=1645, util=96.48% 00:12:10.308 nvme0n4: ios=508/512, merge=0/0, ticks=1345/257, in_queue=1602, util=96.51% 00:12:10.308 16:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:10.308 [global] 00:12:10.308 thread=1 00:12:10.308 invalidate=1 00:12:10.308 rw=randwrite 00:12:10.308 time_based=1 00:12:10.308 runtime=1 00:12:10.308 ioengine=libaio 00:12:10.308 direct=1 00:12:10.308 bs=4096 00:12:10.308 iodepth=1 00:12:10.308 norandommap=0 00:12:10.308 numjobs=1 00:12:10.308 00:12:10.308 verify_dump=1 00:12:10.308 verify_backlog=512 00:12:10.308 verify_state_save=0 00:12:10.308 do_verify=1 00:12:10.308 verify=crc32c-intel 00:12:10.308 [job0] 00:12:10.308 filename=/dev/nvme0n1 00:12:10.308 [job1] 00:12:10.308 filename=/dev/nvme0n2 00:12:10.308 [job2] 00:12:10.308 filename=/dev/nvme0n3 00:12:10.308 [job3] 00:12:10.308 filename=/dev/nvme0n4 00:12:10.308 Could not set queue depth (nvme0n1) 00:12:10.308 Could not set queue depth (nvme0n2) 00:12:10.308 Could not set queue depth (nvme0n3) 00:12:10.308 Could not set queue depth (nvme0n4) 00:12:10.569 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.569 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.569 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.569 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.569 fio-3.35 00:12:10.569 Starting 4 threads 00:12:11.955 00:12:11.955 job0: (groupid=0, jobs=1): err= 0: pid=1316151: Thu Jul 25 16:51:32 2024 00:12:11.955 read: IOPS=12, BW=51.0KiB/s (52.3kB/s)(52.0KiB/1019msec) 00:12:11.955 slat (nsec): min=25627, max=26331, avg=25879.77, stdev=188.91 00:12:11.955 clat (usec): min=1477, max=42021, avg=38831.60, stdev=11223.96 00:12:11.955 lat (usec): min=1503, max=42047, avg=38857.48, stdev=11223.94 00:12:11.955 clat percentiles (usec): 00:12:11.955 | 1.00th=[ 1483], 5.00th=[ 1483], 10.00th=[41681], 20.00th=[41681], 00:12:11.955 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:11.955 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:11.955 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:11.955 | 99.99th=[42206] 00:12:11.955 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:12:11.955 slat (nsec): min=9131, max=51885, avg=33242.81, stdev=2846.82 00:12:11.955 clat (usec): min=602, max=1903, avg=955.94, stdev=92.55 00:12:11.955 lat (usec): min=636, max=1942, avg=989.18, stdev=92.70 00:12:11.955 clat percentiles (usec): 
00:12:11.955 | 1.00th=[ 742], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 889], 00:12:11.955 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 963], 60.00th=[ 988], 00:12:11.955 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:12:11.955 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1909], 99.95th=[ 1909], 00:12:11.955 | 99.99th=[ 1909] 00:12:11.955 bw ( KiB/s): min= 104, max= 3992, per=25.48%, avg=2048.00, stdev=2749.23, samples=2 00:12:11.955 iops : min= 26, max= 998, avg=512.00, stdev=687.31, samples=2 00:12:11.955 lat (usec) : 750=1.71%, 1000=64.57% 00:12:11.955 lat (msec) : 2=31.43%, 50=2.29% 00:12:11.955 cpu : usr=0.79%, sys=2.55%, ctx=528, majf=0, minf=1 00:12:11.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.955 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.955 job1: (groupid=0, jobs=1): err= 0: pid=1316152: Thu Jul 25 16:51:32 2024 00:12:11.955 read: IOPS=375, BW=1501KiB/s (1537kB/s)(1504KiB/1002msec) 00:12:11.955 slat (nsec): min=26796, max=61339, avg=27647.40, stdev=2980.29 00:12:11.955 clat (usec): min=1089, max=1434, avg=1283.32, stdev=48.61 00:12:11.955 lat (usec): min=1117, max=1462, avg=1310.97, stdev=48.77 00:12:11.955 clat percentiles (usec): 00:12:11.955 | 1.00th=[ 1123], 5.00th=[ 1188], 10.00th=[ 1221], 20.00th=[ 1254], 00:12:11.955 | 30.00th=[ 1270], 40.00th=[ 1270], 50.00th=[ 1287], 60.00th=[ 1303], 00:12:11.955 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1336], 95.00th=[ 1352], 00:12:11.955 | 99.00th=[ 1385], 99.50th=[ 1401], 99.90th=[ 1434], 99.95th=[ 1434], 00:12:11.955 | 99.99th=[ 1434] 00:12:11.955 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:12:11.955 slat (nsec): min=9462, max=71869, avg=34177.59, stdev=3747.00 00:12:11.955 clat (usec): min=559, max=1214, avg=936.66, stdev=85.61 00:12:11.955 lat (usec): min=571, max=1265, avg=970.84, stdev=86.14 00:12:11.955 clat percentiles (usec): 00:12:11.955 | 1.00th=[ 701], 5.00th=[ 766], 10.00th=[ 832], 20.00th=[ 865], 00:12:11.955 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 963], 00:12:11.955 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:12:11.955 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1221], 99.95th=[ 1221], 00:12:11.955 | 99.99th=[ 1221] 00:12:11.955 bw ( KiB/s): min= 32, max= 4064, per=25.48%, avg=2048.00, stdev=2851.05, samples=2 00:12:11.955 iops : min= 8, max= 1016, avg=512.00, stdev=712.76, samples=2 00:12:11.955 lat (usec) : 750=1.80%, 1000=43.13% 00:12:11.955 lat (msec) : 2=55.07% 00:12:11.955 cpu : usr=3.10%, sys=2.70%, ctx=890, majf=0, minf=1 00:12:11.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.955 issued rwts: total=376,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.955 job2: (groupid=0, jobs=1): err= 0: pid=1316153: Thu Jul 25 16:51:32 2024 00:12:11.955 read: IOPS=366, BW=1465KiB/s (1500kB/s)(1468KiB/1002msec) 00:12:11.955 slat (nsec): min=25034, max=74600, avg=26125.22, stdev=3679.94 00:12:11.955 clat (usec): min=1106, max=1490, 
avg=1337.32, stdev=58.91 00:12:11.955 lat (usec): min=1132, max=1515, avg=1363.44, stdev=58.87 00:12:11.955 clat percentiles (usec): 00:12:11.955 | 1.00th=[ 1139], 5.00th=[ 1237], 10.00th=[ 1270], 20.00th=[ 1303], 00:12:11.955 | 30.00th=[ 1319], 40.00th=[ 1336], 50.00th=[ 1336], 60.00th=[ 1352], 00:12:11.955 | 70.00th=[ 1369], 80.00th=[ 1385], 90.00th=[ 1401], 95.00th=[ 1418], 00:12:11.955 | 99.00th=[ 1450], 99.50th=[ 1483], 99.90th=[ 1483], 99.95th=[ 1483], 00:12:11.955 | 99.99th=[ 1483] 00:12:11.955 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:12:11.955 slat (nsec): min=10240, max=52217, avg=32065.83, stdev=4744.08 00:12:11.955 clat (usec): min=592, max=2541, avg=924.92, stdev=137.37 00:12:11.955 lat (usec): min=624, max=2574, avg=956.98, stdev=138.53 00:12:11.955 clat percentiles (usec): 00:12:11.955 | 1.00th=[ 635], 5.00th=[ 742], 10.00th=[ 766], 20.00th=[ 848], 00:12:11.955 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 963], 00:12:11.955 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1029], 95.00th=[ 1057], 00:12:11.955 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 2540], 99.95th=[ 2540], 00:12:11.955 | 99.99th=[ 2540] 00:12:11.955 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:12:11.955 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:11.956 lat (usec) : 750=3.64%, 1000=41.64% 00:12:11.956 lat (msec) : 2=54.49%, 4=0.23% 00:12:11.956 cpu : usr=1.40%, sys=2.60%, ctx=884, majf=0, minf=1 00:12:11.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.956 issued rwts: total=367,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.956 job3: (groupid=0, jobs=1): err= 0: pid=1316154: Thu Jul 25 16:51:32 2024 00:12:11.956 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1007msec) 00:12:11.956 slat (nsec): min=25125, max=27154, avg=25569.35, stdev=433.93 00:12:11.956 clat (usec): min=41779, max=42158, avg=41951.21, stdev=95.08 00:12:11.956 lat (usec): min=41805, max=42184, avg=41976.78, stdev=95.11 00:12:11.956 clat percentiles (usec): 00:12:11.956 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:11.956 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:11.956 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:11.956 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:11.956 | 99.99th=[42206] 00:12:11.956 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:12:11.956 slat (nsec): min=4115, max=51135, avg=28438.54, stdev=9056.13 00:12:11.956 clat (usec): min=191, max=1542, avg=530.26, stdev=151.80 00:12:11.956 lat (usec): min=202, max=1552, avg=558.70, stdev=153.81 00:12:11.956 clat percentiles (usec): 00:12:11.956 | 1.00th=[ 206], 5.00th=[ 293], 10.00th=[ 322], 20.00th=[ 408], 00:12:11.956 | 30.00th=[ 457], 40.00th=[ 498], 50.00th=[ 537], 60.00th=[ 570], 00:12:11.956 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 742], 00:12:11.956 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[ 1549], 99.95th=[ 1549], 00:12:11.956 | 99.99th=[ 1549] 00:12:11.956 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:12:11.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, 
samples=1 00:12:11.956 lat (usec) : 250=3.02%, 500=35.92%, 750=53.69%, 1000=3.59% 00:12:11.956 lat (msec) : 2=0.57%, 50=3.21% 00:12:11.956 cpu : usr=0.89%, sys=1.29%, ctx=531, majf=0, minf=1 00:12:11.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.956 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.956 00:12:11.956 Run status group 0 (all jobs): 00:12:11.956 READ: bw=3034KiB/s (3107kB/s), 51.0KiB/s-1501KiB/s (52.3kB/s-1537kB/s), io=3092KiB (3166kB), run=1002-1019msec 00:12:11.956 WRITE: bw=8039KiB/s (8232kB/s), 2010KiB/s-2044KiB/s (2058kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1019msec 00:12:11.956 00:12:11.956 Disk stats (read/write): 00:12:11.956 nvme0n1: ios=61/512, merge=0/0, ticks=1640/493, in_queue=2133, util=98.88% 00:12:11.956 nvme0n2: ios=229/512, merge=0/0, ticks=494/452, in_queue=946, util=97.19% 00:12:11.956 nvme0n3: ios=254/512, merge=0/0, ticks=516/480, in_queue=996, util=97.98% 00:12:11.956 nvme0n4: ios=41/512, merge=0/0, ticks=1034/247, in_queue=1281, util=96.77% 00:12:11.956 16:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:11.956 [global] 00:12:11.956 thread=1 00:12:11.956 invalidate=1 00:12:11.956 rw=write 00:12:11.956 time_based=1 00:12:11.956 runtime=1 00:12:11.956 ioengine=libaio 00:12:11.956 direct=1 00:12:11.956 bs=4096 00:12:11.956 iodepth=128 00:12:11.956 norandommap=0 00:12:11.956 numjobs=1 00:12:11.956 00:12:11.956 verify_dump=1 00:12:11.956 verify_backlog=512 00:12:11.956 verify_state_save=0 00:12:11.956 do_verify=1 00:12:11.956 verify=crc32c-intel 00:12:11.956 [job0] 00:12:11.956 filename=/dev/nvme0n1 00:12:11.956 [job1] 00:12:11.956 filename=/dev/nvme0n2 00:12:11.956 [job2] 00:12:11.956 filename=/dev/nvme0n3 00:12:11.956 [job3] 00:12:11.956 filename=/dev/nvme0n4 00:12:12.238 Could not set queue depth (nvme0n1) 00:12:12.238 Could not set queue depth (nvme0n2) 00:12:12.238 Could not set queue depth (nvme0n3) 00:12:12.238 Could not set queue depth (nvme0n4) 00:12:12.496 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.496 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.497 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.497 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.497 fio-3.35 00:12:12.497 Starting 4 threads 00:12:13.905 00:12:13.905 job0: (groupid=0, jobs=1): err= 0: pid=1316679: Thu Jul 25 16:51:33 2024 00:12:13.905 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:12:13.905 slat (nsec): min=868, max=12192k, avg=68645.55, stdev=527457.40 00:12:13.905 clat (usec): min=1929, max=27036, avg=10448.06, stdev=3709.10 00:12:13.905 lat (usec): min=1968, max=28187, avg=10516.71, stdev=3728.53 00:12:13.905 clat percentiles (usec): 00:12:13.905 | 1.00th=[ 3425], 5.00th=[ 6194], 10.00th=[ 7046], 20.00th=[ 8094], 00:12:13.905 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10421], 00:12:13.905 | 70.00th=[11076], 80.00th=[12256], 
90.00th=[14877], 95.00th=[17695], 00:12:13.905 | 99.00th=[24511], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:12:13.905 | 99.99th=[27132] 00:12:13.905 write: IOPS=6437, BW=25.1MiB/s (26.4MB/s)(25.3MiB/1007msec); 0 zone resets 00:12:13.905 slat (nsec): min=1522, max=15642k, avg=69617.98, stdev=480038.80 00:12:13.905 clat (usec): min=1026, max=25026, avg=9810.77, stdev=4240.47 00:12:13.905 lat (usec): min=1033, max=25582, avg=9880.38, stdev=4256.34 00:12:13.905 clat percentiles (usec): 00:12:13.905 | 1.00th=[ 2376], 5.00th=[ 4555], 10.00th=[ 5604], 20.00th=[ 6521], 00:12:13.905 | 30.00th=[ 7504], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:12:13.905 | 70.00th=[10683], 80.00th=[12125], 90.00th=[16057], 95.00th=[19268], 00:12:13.905 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:12:13.905 | 99.99th=[25035] 00:12:13.905 bw ( KiB/s): min=25224, max=25616, per=28.44%, avg=25420.00, stdev=277.19, samples=2 00:12:13.905 iops : min= 6306, max= 6404, avg=6355.00, stdev=69.30, samples=2 00:12:13.905 lat (msec) : 2=0.18%, 4=3.32%, 10=55.57%, 20=37.30%, 50=3.63% 00:12:13.905 cpu : usr=4.08%, sys=6.26%, ctx=515, majf=0, minf=1 00:12:13.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:13.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.905 issued rwts: total=6144,6483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.905 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.905 job1: (groupid=0, jobs=1): err= 0: pid=1316680: Thu Jul 25 16:51:33 2024 00:12:13.905 read: IOPS=3942, BW=15.4MiB/s (16.1MB/s)(16.0MiB/1039msec) 00:12:13.905 slat (nsec): min=859, max=19876k, avg=75908.73, stdev=680587.00 00:12:13.905 clat (usec): min=2012, max=44020, avg=13173.61, stdev=7135.97 00:12:13.905 lat (usec): min=2021, max=49323, avg=13249.52, stdev=7190.23 00:12:13.905 clat percentiles (usec): 00:12:13.905 | 1.00th=[ 2900], 5.00th=[ 3818], 10.00th=[ 5800], 20.00th=[ 7504], 00:12:13.905 | 30.00th=[ 8586], 40.00th=[ 9896], 50.00th=[11600], 60.00th=[13173], 00:12:13.905 | 70.00th=[15664], 80.00th=[19006], 90.00th=[22676], 95.00th=[25822], 00:12:13.905 | 99.00th=[33817], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:12:13.905 | 99.99th=[43779] 00:12:13.905 write: IOPS=4058, BW=15.9MiB/s (16.6MB/s)(16.5MiB/1039msec); 0 zone resets 00:12:13.905 slat (nsec): min=1533, max=13453k, avg=123404.09, stdev=711603.61 00:12:13.905 clat (usec): min=1228, max=53223, avg=18498.77, stdev=12340.61 00:12:13.905 lat (usec): min=1230, max=53230, avg=18622.17, stdev=12411.64 00:12:13.905 clat percentiles (usec): 00:12:13.905 | 1.00th=[ 2245], 5.00th=[ 4883], 10.00th=[ 6259], 20.00th=[ 8029], 00:12:13.905 | 30.00th=[ 9634], 40.00th=[12911], 50.00th=[15270], 60.00th=[17433], 00:12:13.905 | 70.00th=[22414], 80.00th=[27919], 90.00th=[39584], 95.00th=[44827], 00:12:13.905 | 99.00th=[50594], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:12:13.905 | 99.99th=[53216] 00:12:13.905 bw ( KiB/s): min=14176, max=18824, per=18.46%, avg=16500.00, stdev=3286.63, samples=2 00:12:13.905 iops : min= 3544, max= 4706, avg=4125.00, stdev=821.66, samples=2 00:12:13.905 lat (msec) : 2=0.37%, 4=4.68%, 10=31.60%, 20=38.30%, 50=24.29% 00:12:13.905 lat (msec) : 100=0.76% 00:12:13.905 cpu : usr=3.37%, sys=3.28%, ctx=503, majf=0, minf=1 00:12:13.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:13.905 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.905 issued rwts: total=4096,4217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.905 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.905 job2: (groupid=0, jobs=1): err= 0: pid=1316681: Thu Jul 25 16:51:33 2024 00:12:13.905 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:12:13.905 slat (nsec): min=931, max=17409k, avg=82603.09, stdev=592926.23 00:12:13.905 clat (usec): min=4444, max=29011, avg=10777.34, stdev=3734.39 00:12:13.905 lat (usec): min=4591, max=29013, avg=10859.94, stdev=3766.82 00:12:13.905 clat percentiles (usec): 00:12:13.905 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7898], 00:12:13.906 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10814], 00:12:13.906 | 70.00th=[11731], 80.00th=[13042], 90.00th=[14746], 95.00th=[17957], 00:12:13.906 | 99.00th=[25560], 99.50th=[26870], 99.90th=[28181], 99.95th=[28181], 00:12:13.906 | 99.99th=[28967] 00:12:13.906 write: IOPS=6337, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1005msec); 0 zone resets 00:12:13.906 slat (nsec): min=1558, max=8386.6k, avg=73343.12, stdev=453933.61 00:12:13.906 clat (usec): min=2707, max=29011, avg=9628.20, stdev=3131.86 00:12:13.906 lat (usec): min=2748, max=29014, avg=9701.54, stdev=3139.43 00:12:13.906 clat percentiles (usec): 00:12:13.906 | 1.00th=[ 3556], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 7046], 00:12:13.906 | 30.00th=[ 7701], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:12:13.906 | 70.00th=[11076], 80.00th=[12518], 90.00th=[13829], 95.00th=[14877], 00:12:13.906 | 99.00th=[17695], 99.50th=[18744], 99.90th=[26084], 99.95th=[28967], 00:12:13.906 | 99.99th=[28967] 00:12:13.906 bw ( KiB/s): min=24576, max=25360, per=27.94%, avg=24968.00, stdev=554.37, samples=2 00:12:13.906 iops : min= 6144, max= 6340, avg=6242.00, stdev=138.59, samples=2 00:12:13.906 lat (msec) : 4=1.02%, 10=53.55%, 20=43.75%, 50=1.68% 00:12:13.906 cpu : usr=4.18%, sys=4.28%, ctx=579, majf=0, minf=1 00:12:13.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:13.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.906 issued rwts: total=6144,6369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.906 job3: (groupid=0, jobs=1): err= 0: pid=1316682: Thu Jul 25 16:51:33 2024 00:12:13.906 read: IOPS=5870, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1004msec) 00:12:13.906 slat (nsec): min=929, max=33946k, avg=74572.17, stdev=629961.65 00:12:13.906 clat (usec): min=2476, max=41364, avg=10699.60, stdev=5386.97 00:12:13.906 lat (usec): min=2560, max=41372, avg=10774.17, stdev=5407.04 00:12:13.906 clat percentiles (usec): 00:12:13.906 | 1.00th=[ 3130], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7570], 00:12:13.906 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10159], 00:12:13.906 | 70.00th=[11338], 80.00th=[12911], 90.00th=[15139], 95.00th=[17957], 00:12:13.906 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:13.906 | 99.99th=[41157] 00:12:13.906 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:12:13.906 slat (nsec): min=1588, max=5425.3k, avg=74178.14, stdev=388174.05 00:12:13.906 clat (usec): min=843, max=31968, avg=10376.96, stdev=4740.91 00:12:13.906 lat (usec): 
min=851, max=31977, avg=10451.14, stdev=4759.98 00:12:13.906 clat percentiles (usec): 00:12:13.906 | 1.00th=[ 3195], 5.00th=[ 4490], 10.00th=[ 5211], 20.00th=[ 6521], 00:12:13.906 | 30.00th=[ 7898], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9896], 00:12:13.906 | 70.00th=[11469], 80.00th=[13566], 90.00th=[16450], 95.00th=[20055], 00:12:13.906 | 99.00th=[25560], 99.50th=[27919], 99.90th=[31851], 99.95th=[31851], 00:12:13.906 | 99.99th=[31851] 00:12:13.906 bw ( KiB/s): min=23440, max=25712, per=27.50%, avg=24576.00, stdev=1606.55, samples=2 00:12:13.906 iops : min= 5860, max= 6428, avg=6144.00, stdev=401.64, samples=2 00:12:13.906 lat (usec) : 1000=0.02% 00:12:13.906 lat (msec) : 2=0.09%, 4=2.35%, 10=57.37%, 20=36.16%, 50=4.00% 00:12:13.906 cpu : usr=3.89%, sys=6.08%, ctx=723, majf=0, minf=1 00:12:13.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:13.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.906 issued rwts: total=5894,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.906 00:12:13.906 Run status group 0 (all jobs): 00:12:13.906 READ: bw=83.8MiB/s (87.8MB/s), 15.4MiB/s-23.9MiB/s (16.1MB/s-25.0MB/s), io=87.0MiB (91.2MB), run=1004-1039msec 00:12:13.906 WRITE: bw=87.3MiB/s (91.5MB/s), 15.9MiB/s-25.1MiB/s (16.6MB/s-26.4MB/s), io=90.7MiB (95.1MB), run=1004-1039msec 00:12:13.906 00:12:13.906 Disk stats (read/write): 00:12:13.906 nvme0n1: ios=5170/5397, merge=0/0, ticks=44906/48679, in_queue=93585, util=87.27% 00:12:13.906 nvme0n2: ios=3596/3584, merge=0/0, ticks=35220/47872, in_queue=83092, util=88.70% 00:12:13.906 nvme0n3: ios=4858/5120, merge=0/0, ticks=53528/49787, in_queue=103315, util=88.41% 00:12:13.906 nvme0n4: ios=4976/5120, merge=0/0, ticks=34634/40099, in_queue=74733, util=99.79% 00:12:13.906 16:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:13.906 [global] 00:12:13.906 thread=1 00:12:13.906 invalidate=1 00:12:13.906 rw=randwrite 00:12:13.906 time_based=1 00:12:13.906 runtime=1 00:12:13.906 ioengine=libaio 00:12:13.906 direct=1 00:12:13.906 bs=4096 00:12:13.906 iodepth=128 00:12:13.906 norandommap=0 00:12:13.906 numjobs=1 00:12:13.906 00:12:13.906 verify_dump=1 00:12:13.906 verify_backlog=512 00:12:13.906 verify_state_save=0 00:12:13.906 do_verify=1 00:12:13.906 verify=crc32c-intel 00:12:13.906 [job0] 00:12:13.906 filename=/dev/nvme0n1 00:12:13.906 [job1] 00:12:13.906 filename=/dev/nvme0n2 00:12:13.906 [job2] 00:12:13.906 filename=/dev/nvme0n3 00:12:13.906 [job3] 00:12:13.906 filename=/dev/nvme0n4 00:12:13.906 Could not set queue depth (nvme0n1) 00:12:13.906 Could not set queue depth (nvme0n2) 00:12:13.906 Could not set queue depth (nvme0n3) 00:12:13.906 Could not set queue depth (nvme0n4) 00:12:14.175 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.175 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.175 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.175 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.175 fio-3.35 00:12:14.175 
Starting 4 threads 00:12:15.595 00:12:15.595 job0: (groupid=0, jobs=1): err= 0: pid=1317200: Thu Jul 25 16:51:35 2024 00:12:15.595 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:12:15.595 slat (nsec): min=921, max=13775k, avg=125485.67, stdev=874889.90 00:12:15.595 clat (usec): min=5098, max=55385, avg=15759.82, stdev=6129.00 00:12:15.595 lat (usec): min=5103, max=55389, avg=15885.31, stdev=6190.24 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 6915], 5.00th=[ 7701], 10.00th=[ 8848], 20.00th=[10552], 00:12:15.595 | 30.00th=[11994], 40.00th=[14222], 50.00th=[15401], 60.00th=[16909], 00:12:15.595 | 70.00th=[17957], 80.00th=[19268], 90.00th=[23200], 95.00th=[25297], 00:12:15.595 | 99.00th=[39584], 99.50th=[49021], 99.90th=[55313], 99.95th=[55313], 00:12:15.595 | 99.99th=[55313] 00:12:15.595 write: IOPS=4258, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1012msec); 0 zone resets 00:12:15.595 slat (nsec): min=1633, max=10080k, avg=106954.68, stdev=661731.77 00:12:15.595 clat (usec): min=3009, max=61333, avg=14606.58, stdev=10105.48 00:12:15.595 lat (usec): min=3018, max=61339, avg=14713.53, stdev=10147.17 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 4228], 5.00th=[ 6390], 10.00th=[ 7635], 20.00th=[ 9110], 00:12:15.595 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11600], 60.00th=[13042], 00:12:15.595 | 70.00th=[15008], 80.00th=[17171], 90.00th=[21627], 95.00th=[41681], 00:12:15.595 | 99.00th=[56886], 99.50th=[58983], 99.90th=[61080], 99.95th=[61080], 00:12:15.595 | 99.99th=[61080] 00:12:15.595 bw ( KiB/s): min=12976, max=20480, per=22.01%, avg=16728.00, stdev=5306.13, samples=2 00:12:15.595 iops : min= 3244, max= 5120, avg=4182.00, stdev=1326.53, samples=2 00:12:15.595 lat (msec) : 4=0.45%, 10=24.43%, 20=61.55%, 50=11.86%, 100=1.70% 00:12:15.595 cpu : usr=3.56%, sys=3.96%, ctx=339, majf=0, minf=1 00:12:15.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:15.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.595 issued rwts: total=4096,4310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.595 job1: (groupid=0, jobs=1): err= 0: pid=1317201: Thu Jul 25 16:51:35 2024 00:12:15.595 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:12:15.595 slat (nsec): min=846, max=46707k, avg=113793.10, stdev=983239.17 00:12:15.595 clat (usec): min=4620, max=59754, avg=15558.35, stdev=8734.70 00:12:15.595 lat (usec): min=4626, max=59781, avg=15672.15, stdev=8787.95 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 5211], 5.00th=[ 6587], 10.00th=[ 7570], 20.00th=[ 8717], 00:12:15.595 | 30.00th=[10814], 40.00th=[12780], 50.00th=[14222], 60.00th=[15926], 00:12:15.595 | 70.00th=[17695], 80.00th=[18744], 90.00th=[24511], 95.00th=[28705], 00:12:15.595 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:12:15.595 | 99.99th=[59507] 00:12:15.595 write: IOPS=4750, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1005msec); 0 zone resets 00:12:15.595 slat (nsec): min=1465, max=8780.1k, avg=94489.29, stdev=523798.81 00:12:15.595 clat (usec): min=1680, max=25474, avg=11602.52, stdev=3888.52 00:12:15.595 lat (usec): min=1683, max=28466, avg=11697.01, stdev=3910.03 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 3916], 5.00th=[ 5735], 10.00th=[ 6718], 20.00th=[ 8225], 00:12:15.595 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[11469], 
60.00th=[12387], 00:12:15.595 | 70.00th=[13304], 80.00th=[14484], 90.00th=[17171], 95.00th=[18220], 00:12:15.595 | 99.00th=[22414], 99.50th=[22676], 99.90th=[23987], 99.95th=[23987], 00:12:15.595 | 99.99th=[25560] 00:12:15.595 bw ( KiB/s): min=16728, max=20472, per=24.47%, avg=18600.00, stdev=2647.41, samples=2 00:12:15.595 iops : min= 4182, max= 5118, avg=4650.00, stdev=661.85, samples=2 00:12:15.595 lat (msec) : 2=0.10%, 4=0.62%, 10=30.42%, 20=59.31%, 50=8.21% 00:12:15.595 lat (msec) : 100=1.35% 00:12:15.595 cpu : usr=3.09%, sys=4.38%, ctx=519, majf=0, minf=1 00:12:15.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:15.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.595 issued rwts: total=4608,4774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.595 job2: (groupid=0, jobs=1): err= 0: pid=1317202: Thu Jul 25 16:51:35 2024 00:12:15.595 read: IOPS=3651, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1006msec) 00:12:15.595 slat (nsec): min=939, max=14435k, avg=115558.02, stdev=831671.10 00:12:15.595 clat (usec): min=2117, max=47049, avg=15955.34, stdev=7247.08 00:12:15.595 lat (usec): min=2123, max=47905, avg=16070.89, stdev=7291.34 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 2835], 5.00th=[ 5669], 10.00th=[ 8455], 20.00th=[10945], 00:12:15.595 | 30.00th=[11731], 40.00th=[13960], 50.00th=[15926], 60.00th=[16712], 00:12:15.595 | 70.00th=[18220], 80.00th=[19268], 90.00th=[23725], 95.00th=[28181], 00:12:15.595 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:12:15.595 | 99.99th=[46924] 00:12:15.595 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:12:15.595 slat (nsec): min=1571, max=13012k, avg=127575.04, stdev=764969.22 00:12:15.595 clat (usec): min=1271, max=59593, avg=16774.52, stdev=11422.55 00:12:15.595 lat (usec): min=1280, max=59598, avg=16902.10, stdev=11485.36 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 2704], 5.00th=[ 4359], 10.00th=[ 5866], 20.00th=[ 8586], 00:12:15.595 | 30.00th=[10421], 40.00th=[11994], 50.00th=[14222], 60.00th=[16450], 00:12:15.595 | 70.00th=[17695], 80.00th=[21365], 90.00th=[31851], 95.00th=[44827], 00:12:15.595 | 99.00th=[56361], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:12:15.595 | 99.99th=[59507] 00:12:15.595 bw ( KiB/s): min=13360, max=19104, per=21.36%, avg=16232.00, stdev=4061.62, samples=2 00:12:15.595 iops : min= 3340, max= 4776, avg=4058.00, stdev=1015.41, samples=2 00:12:15.595 lat (msec) : 2=0.24%, 4=3.02%, 10=18.07%, 20=58.60%, 50=18.38% 00:12:15.595 lat (msec) : 100=1.67% 00:12:15.595 cpu : usr=2.79%, sys=4.38%, ctx=356, majf=0, minf=1 00:12:15.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:15.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.595 issued rwts: total=3673,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.595 job3: (groupid=0, jobs=1): err= 0: pid=1317203: Thu Jul 25 16:51:35 2024 00:12:15.595 read: IOPS=6063, BW=23.7MiB/s (24.8MB/s)(24.7MiB/1044msec) 00:12:15.595 slat (nsec): min=938, max=7474.9k, avg=68318.20, stdev=443707.14 00:12:15.595 clat (usec): min=3776, max=50771, avg=10775.89, stdev=5919.67 
00:12:15.595 lat (usec): min=3782, max=52339, avg=10844.21, stdev=5929.57 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 5145], 5.00th=[ 6194], 10.00th=[ 7046], 20.00th=[ 8225], 00:12:15.595 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10421], 00:12:15.595 | 70.00th=[11207], 80.00th=[12256], 90.00th=[13435], 95.00th=[15139], 00:12:15.595 | 99.00th=[45876], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:12:15.595 | 99.99th=[50594] 00:12:15.595 write: IOPS=6375, BW=24.9MiB/s (26.1MB/s)(26.0MiB/1044msec); 0 zone resets 00:12:15.595 slat (nsec): min=1562, max=7131.7k, avg=65821.49, stdev=410711.24 00:12:15.595 clat (usec): min=1083, max=23771, avg=9559.51, stdev=3826.39 00:12:15.595 lat (usec): min=1091, max=23802, avg=9625.33, stdev=3842.70 00:12:15.595 clat percentiles (usec): 00:12:15.595 | 1.00th=[ 2704], 5.00th=[ 4555], 10.00th=[ 5604], 20.00th=[ 6194], 00:12:15.595 | 30.00th=[ 6915], 40.00th=[ 8029], 50.00th=[ 8979], 60.00th=[ 9896], 00:12:15.595 | 70.00th=[11076], 80.00th=[13042], 90.00th=[14877], 95.00th=[16712], 00:12:15.595 | 99.00th=[20055], 99.50th=[22414], 99.90th=[23462], 99.95th=[23462], 00:12:15.595 | 99.99th=[23725] 00:12:15.595 bw ( KiB/s): min=24576, max=28672, per=35.03%, avg=26624.00, stdev=2896.31, samples=2 00:12:15.595 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:12:15.595 lat (msec) : 2=0.16%, 4=1.69%, 10=56.06%, 20=40.36%, 50=1.24% 00:12:15.595 lat (msec) : 100=0.49% 00:12:15.595 cpu : usr=3.55%, sys=6.42%, ctx=551, majf=0, minf=1 00:12:15.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:15.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.595 issued rwts: total=6330,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.595 00:12:15.596 Run status group 0 (all jobs): 00:12:15.596 READ: bw=70.0MiB/s (73.4MB/s), 14.3MiB/s-23.7MiB/s (15.0MB/s-24.8MB/s), io=73.1MiB (76.6MB), run=1005-1044msec 00:12:15.596 WRITE: bw=74.2MiB/s (77.8MB/s), 15.9MiB/s-24.9MiB/s (16.7MB/s-26.1MB/s), io=77.5MiB (81.2MB), run=1005-1044msec 00:12:15.596 00:12:15.596 Disk stats (read/write): 00:12:15.596 nvme0n1: ios=3617/3778, merge=0/0, ticks=53894/48482, in_queue=102376, util=98.90% 00:12:15.596 nvme0n2: ios=3658/4096, merge=0/0, ticks=28560/20491, in_queue=49051, util=88.58% 00:12:15.596 nvme0n3: ios=3063/3078, merge=0/0, ticks=46195/49583, in_queue=95778, util=96.53% 00:12:15.596 nvme0n4: ios=5517/5632, merge=0/0, ticks=51614/44765, in_queue=96379, util=98.83% 00:12:15.596 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:15.596 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1317537 00:12:15.596 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:15.596 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:15.596 [global] 00:12:15.596 thread=1 00:12:15.596 invalidate=1 00:12:15.596 rw=read 00:12:15.596 time_based=1 00:12:15.596 runtime=10 00:12:15.596 ioengine=libaio 00:12:15.596 direct=1 00:12:15.596 bs=4096 00:12:15.596 iodepth=1 00:12:15.596 norandommap=1 00:12:15.596 numjobs=1 00:12:15.596 00:12:15.596 [job0] 00:12:15.596 filename=/dev/nvme0n1 00:12:15.596 [job1] 
00:12:15.596 filename=/dev/nvme0n2 00:12:15.596 [job2] 00:12:15.596 filename=/dev/nvme0n3 00:12:15.596 [job3] 00:12:15.596 filename=/dev/nvme0n4 00:12:15.596 Could not set queue depth (nvme0n1) 00:12:15.596 Could not set queue depth (nvme0n2) 00:12:15.596 Could not set queue depth (nvme0n3) 00:12:15.596 Could not set queue depth (nvme0n4) 00:12:15.860 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.860 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.860 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.860 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.860 fio-3.35 00:12:15.860 Starting 4 threads 00:12:18.406 16:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:18.406 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7331840, buflen=4096 00:12:18.406 fio: pid=1317731, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.406 16:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:18.667 16:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.667 16:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:18.667 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=278528, buflen=4096 00:12:18.667 fio: pid=1317730, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.928 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4816896, buflen=4096 00:12:18.928 fio: pid=1317728, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.928 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.928 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:18.928 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.928 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:18.928 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=983040, buflen=4096 00:12:18.928 fio: pid=1317729, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:19.189 00:12:19.189 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1317728: Thu Jul 25 16:51:39 2024 00:12:19.189 read: IOPS=402, BW=1609KiB/s (1648kB/s)(4704KiB/2923msec) 00:12:19.189 slat (usec): min=7, max=24306, avg=58.24, stdev=829.13 00:12:19.189 clat (usec): min=351, max=42157, avg=2402.88, stdev=7730.91 00:12:19.189 lat (usec): min=377, max=42184, avg=2461.14, stdev=7769.55 00:12:19.189 clat percentiles (usec): 00:12:19.189 | 1.00th=[ 553], 
5.00th=[ 685], 10.00th=[ 725], 20.00th=[ 783], 00:12:19.189 | 30.00th=[ 832], 40.00th=[ 881], 50.00th=[ 906], 60.00th=[ 930], 00:12:19.189 | 70.00th=[ 947], 80.00th=[ 971], 90.00th=[ 1012], 95.00th=[ 1385], 00:12:19.189 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:19.189 | 99.99th=[42206] 00:12:19.189 bw ( KiB/s): min= 96, max= 4448, per=30.92%, avg=1299.20, stdev=1902.00, samples=5 00:12:19.189 iops : min= 24, max= 1112, avg=324.80, stdev=475.50, samples=5 00:12:19.189 lat (usec) : 500=0.34%, 750=13.93%, 1000=73.49% 00:12:19.189 lat (msec) : 2=8.41%, 20=0.08%, 50=3.65% 00:12:19.189 cpu : usr=0.38%, sys=1.27%, ctx=1181, majf=0, minf=1 00:12:19.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.189 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.189 issued rwts: total=1177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.189 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1317729: Thu Jul 25 16:51:39 2024 00:12:19.189 read: IOPS=77, BW=308KiB/s (315kB/s)(960KiB/3117msec) 00:12:19.189 slat (usec): min=6, max=32242, avg=260.22, stdev=2347.93 00:12:19.189 clat (usec): min=607, max=42174, avg=12628.67, stdev=18277.67 00:12:19.189 lat (usec): min=631, max=42198, avg=12889.86, stdev=18283.98 00:12:19.189 clat percentiles (usec): 00:12:19.189 | 1.00th=[ 734], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 1254], 00:12:19.189 | 30.00th=[ 1352], 40.00th=[ 1385], 50.00th=[ 1401], 60.00th=[ 1450], 00:12:19.189 | 70.00th=[ 1647], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:19.189 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:19.189 | 99.99th=[42206] 00:12:19.189 bw ( KiB/s): min= 96, max= 1221, per=6.74%, avg=283.50, stdev=459.28, samples=6 00:12:19.189 iops : min= 24, max= 305, avg=70.83, stdev=114.72, samples=6 00:12:19.189 lat (usec) : 750=1.66%, 1000=15.35% 00:12:19.189 lat (msec) : 2=54.77%, 50=27.80% 00:12:19.189 cpu : usr=0.16%, sys=0.16%, ctx=247, majf=0, minf=1 00:12:19.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.190 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.190 issued rwts: total=241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.190 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1317730: Thu Jul 25 16:51:39 2024 00:12:19.190 read: IOPS=24, BW=98.2KiB/s (101kB/s)(272KiB/2770msec) 00:12:19.190 slat (usec): min=25, max=14443, avg=237.35, stdev=1735.50 00:12:19.190 clat (usec): min=1092, max=42311, avg=40169.93, stdev=8394.06 00:12:19.190 lat (usec): min=1118, max=55851, avg=40410.39, stdev=8593.00 00:12:19.190 clat percentiles (usec): 00:12:19.190 | 1.00th=[ 1090], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:12:19.190 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:19.190 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:19.190 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:19.190 | 99.99th=[42206] 00:12:19.190 bw ( KiB/s): min= 96, max= 104, per=2.31%, avg=97.60, stdev= 3.58, 
samples=5 00:12:19.190 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:12:19.190 lat (msec) : 2=4.35%, 50=94.20% 00:12:19.190 cpu : usr=0.14%, sys=0.00%, ctx=71, majf=0, minf=1 00:12:19.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.190 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.190 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.190 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1317731: Thu Jul 25 16:51:39 2024 00:12:19.190 read: IOPS=691, BW=2764KiB/s (2831kB/s)(7160KiB/2590msec) 00:12:19.190 slat (nsec): min=8830, max=67976, avg=25844.34, stdev=5620.96 00:12:19.190 clat (usec): min=957, max=3964, avg=1399.84, stdev=108.56 00:12:19.190 lat (usec): min=984, max=3994, avg=1425.69, stdev=109.46 00:12:19.190 clat percentiles (usec): 00:12:19.190 | 1.00th=[ 1139], 5.00th=[ 1254], 10.00th=[ 1303], 20.00th=[ 1336], 00:12:19.190 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[ 1418], 60.00th=[ 1418], 00:12:19.190 | 70.00th=[ 1434], 80.00th=[ 1467], 90.00th=[ 1483], 95.00th=[ 1516], 00:12:19.190 | 99.00th=[ 1549], 99.50th=[ 1565], 99.90th=[ 3228], 99.95th=[ 3949], 00:12:19.190 | 99.99th=[ 3949] 00:12:19.190 bw ( KiB/s): min= 2760, max= 2840, per=66.55%, avg=2796.80, stdev=28.62, samples=5 00:12:19.190 iops : min= 690, max= 710, avg=699.20, stdev= 7.16, samples=5 00:12:19.190 lat (usec) : 1000=0.06% 00:12:19.190 lat (msec) : 2=99.78%, 4=0.11% 00:12:19.190 cpu : usr=1.20%, sys=2.67%, ctx=1793, majf=0, minf=2 00:12:19.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.190 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.190 issued rwts: total=1791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.190 00:12:19.190 Run status group 0 (all jobs): 00:12:19.190 READ: bw=4201KiB/s (4302kB/s), 98.2KiB/s-2764KiB/s (101kB/s-2831kB/s), io=12.8MiB (13.4MB), run=2590-3117msec 00:12:19.190 00:12:19.190 Disk stats (read/write): 00:12:19.190 nvme0n1: ios=1142/0, merge=0/0, ticks=3694/0, in_queue=3694, util=98.06% 00:12:19.190 nvme0n2: ios=239/0, merge=0/0, ticks=2984/0, in_queue=2984, util=93.99% 00:12:19.190 nvme0n3: ios=63/0, merge=0/0, ticks=2565/0, in_queue=2565, util=96.03% 00:12:19.190 nvme0n4: ios=1677/0, merge=0/0, ticks=3294/0, in_queue=3294, util=99.47% 00:12:19.190 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.190 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:19.450 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.450 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:19.450 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:12:19.450 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:19.710 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.710 16:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1317537 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:19.972 nvmf hotplug test: fio failed as expected 00:12:19.972 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 
00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.256 rmmod nvme_tcp 00:12:20.256 rmmod nvme_fabrics 00:12:20.256 rmmod nvme_keyring 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1314025 ']' 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1314025 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1314025 ']' 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1314025 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1314025 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1314025' 00:12:20.256 killing process with pid 1314025 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1314025 00:12:20.256 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1314025 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.523 16:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.438 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.438 00:12:22.438 real 0m28.174s 00:12:22.438 user 2m37.186s 00:12:22.438 sys 0m8.926s 00:12:22.438 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.438 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.438 ************************************ 00:12:22.438 END TEST nvmf_fio_target 00:12:22.438 ************************************ 
00:12:22.438 16:51:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:22.438 16:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:22.438 16:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.438 16:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:22.699 ************************************ 00:12:22.699 START TEST nvmf_bdevio 00:12:22.699 ************************************ 00:12:22.699 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:22.699 * Looking for test storage... 00:12:22.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.699 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.699 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:22.699 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.699 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.699 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.699 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.700 16:51:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.700 16:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 
00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:30.853 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:30.853 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:30.853 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:30.853 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.853 16:51:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.853 16:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:30.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:12:30.854 00:12:30.854 --- 10.0.0.2 ping statistics --- 00:12:30.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.854 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.482 ms 00:12:30.854 00:12:30.854 --- 10.0.0.1 ping statistics --- 00:12:30.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.854 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1322790 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1322790 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1322790 ']' 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 [2024-07-25 16:51:50.156537] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:12:30.854 [2024-07-25 16:51:50.156611] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.854 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.854 [2024-07-25 16:51:50.247964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.854 [2024-07-25 16:51:50.343157] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.854 [2024-07-25 16:51:50.343228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.854 [2024-07-25 16:51:50.343237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.854 [2024-07-25 16:51:50.343250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.854 [2024-07-25 16:51:50.343256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.854 [2024-07-25 16:51:50.343434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:30.854 [2024-07-25 16:51:50.343598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:30.854 [2024-07-25 16:51:50.343756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.854 [2024-07-25 16:51:50.343757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.854 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 [2024-07-25 16:51:50.997028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 Malloc0 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 [2024-07-25 16:51:51.062738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:30.854 { 00:12:30.854 "params": { 00:12:30.854 "name": "Nvme$subsystem", 00:12:30.854 "trtype": "$TEST_TRANSPORT", 00:12:30.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:30.854 "adrfam": "ipv4", 00:12:30.854 "trsvcid": "$NVMF_PORT", 00:12:30.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:30.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:30.854 "hdgst": ${hdgst:-false}, 00:12:30.854 "ddgst": ${ddgst:-false} 00:12:30.854 }, 00:12:30.854 "method": "bdev_nvme_attach_controller" 00:12:30.854 } 00:12:30.854 EOF 00:12:30.854 )") 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:30.854 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:30.854 "params": { 00:12:30.854 "name": "Nvme1", 00:12:30.854 "trtype": "tcp", 00:12:30.854 "traddr": "10.0.0.2", 00:12:30.854 "adrfam": "ipv4", 00:12:30.854 "trsvcid": "4420", 00:12:30.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.854 "hdgst": false, 00:12:30.854 "ddgst": false 00:12:30.854 }, 00:12:30.854 "method": "bdev_nvme_attach_controller" 00:12:30.854 }' 00:12:31.116 [2024-07-25 16:51:51.127382] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:12:31.116 [2024-07-25 16:51:51.127465] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323108 ] 00:12:31.116 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.116 [2024-07-25 16:51:51.195346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.116 [2024-07-25 16:51:51.270711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.116 [2024-07-25 16:51:51.270828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.116 [2024-07-25 16:51:51.270832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.377 I/O targets: 00:12:31.378 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:31.378 00:12:31.378 00:12:31.378 CUnit - A unit testing framework for C - Version 2.1-3 00:12:31.378 http://cunit.sourceforge.net/ 00:12:31.378 00:12:31.378 00:12:31.378 Suite: bdevio tests on: Nvme1n1 00:12:31.378 Test: blockdev write read block ...passed 00:12:31.378 Test: blockdev write zeroes read block ...passed 00:12:31.378 Test: blockdev write zeroes read no split ...passed 00:12:31.378 Test: blockdev write zeroes read split ...passed 00:12:31.378 Test: blockdev write zeroes read split partial ...passed 00:12:31.378 Test: blockdev reset ...[2024-07-25 16:51:51.646722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:31.378 [2024-07-25 16:51:51.646791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6ce0 (9): Bad file descriptor 00:12:31.638 [2024-07-25 16:51:51.713177] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:31.638 passed 00:12:31.638 Test: blockdev write read 8 blocks ...passed 00:12:31.638 Test: blockdev write read size > 128k ...passed 00:12:31.638 Test: blockdev write read invalid size ...passed 00:12:31.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.638 Test: blockdev write read max offset ...passed 00:12:31.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.638 Test: blockdev writev readv 8 blocks ...passed 00:12:31.638 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.638 Test: blockdev writev readv block ...passed 00:12:31.898 Test: blockdev writev readv size > 128k ...passed 00:12:31.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.898 Test: blockdev comparev and writev ...[2024-07-25 16:51:51.947513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.947538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:51.947549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.947559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:51.948126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.948135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:51.948144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.948149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:51.948681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.948689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:51.948698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.948703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:51.949354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.949362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:51.949371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.898 [2024-07-25 16:51:51.949376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:31.898 passed 00:12:31.898 Test: blockdev nvme passthru rw ...passed 00:12:31.898 Test: blockdev nvme passthru vendor specific ...[2024-07-25 16:51:52.034406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.898 [2024-07-25 16:51:52.034418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:52.034848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.898 [2024-07-25 16:51:52.034855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:52.035255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.898 [2024-07-25 16:51:52.035262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:31.898 [2024-07-25 16:51:52.035616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.898 [2024-07-25 16:51:52.035623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:31.898 passed 00:12:31.898 Test: blockdev nvme admin passthru ...passed 00:12:31.898 Test: blockdev copy ...passed 00:12:31.898 00:12:31.898 Run Summary: Type Total Ran Passed Failed Inactive 00:12:31.898 suites 1 1 n/a 0 0 00:12:31.898 tests 23 23 23 0 0 00:12:31.898 asserts 152 152 152 0 n/a 00:12:31.898 00:12:31.898 Elapsed time = 1.358 seconds 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.158 rmmod nvme_tcp 00:12:32.158 rmmod nvme_fabrics 00:12:32.158 rmmod nvme_keyring 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1322790 ']' 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1322790 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1322790 ']' 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1322790 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1322790 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1322790' 00:12:32.158 killing process with pid 1322790 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1322790 00:12:32.158 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1322790 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.419 16:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.332 16:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:34.332 00:12:34.332 real 0m11.850s 00:12:34.332 user 0m12.806s 00:12:34.332 sys 0m5.937s 00:12:34.332 16:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.332 16:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.332 ************************************ 00:12:34.332 END TEST nvmf_bdevio 00:12:34.332 ************************************ 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:34.594 00:12:34.594 real 4m54.505s 00:12:34.594 user 11m39.206s 00:12:34.594 sys 1m44.312s 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:34.594 ************************************ 00:12:34.594 END TEST nvmf_target_core 00:12:34.594 ************************************ 00:12:34.594 16:51:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:34.594 16:51:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.594 16:51:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.594 16:51:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.594 ************************************ 00:12:34.594 START TEST nvmf_target_extra 00:12:34.594 ************************************ 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:34.594 * Looking for test storage... 00:12:34.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.594 16:51:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.595 ************************************ 00:12:34.595 START TEST nvmf_example 00:12:34.595 ************************************ 00:12:34.595 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:34.857 * Looking for test storage... 00:12:34.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.857 16:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.857 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:34.858 16:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.004 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:43.005 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:43.005 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:43.005 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.005 16:52:01 
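The scan above matches each PCI function against the supported Intel E810/X722 and Mellanox device IDs and then resolves its kernel interface name through sysfs. A minimal standalone version of that lookup (the PCI addresses and the cvl_0_0/cvl_0_1 names are specific to this machine):

    # mirrors the pci_net_devs=(/sys/bus/pci/devices/$pci/net/*) expansion in nvmf/common.sh
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "$pci -> $(basename "$netdir")"
        done
    done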
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:43.005 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.005 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:43.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.832 ms 00:12:43.005 00:12:43.005 --- 10.0.0.2 ping statistics --- 00:12:43.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.005 rtt min/avg/max/mdev = 0.832/0.832/0.832/0.000 ms 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:12:43.005 00:12:43.005 --- 10.0.0.1 ping statistics --- 00:12:43.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.005 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1327500 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1327500 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1327500 ']' 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.005 16:52:02 
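The nvmf_tcp_init sequence traced above builds the test topology by hand: the target-side port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened for NVMe/TCP, and reachability is checked in both directions before nvme-tcp is loaded. Condensed into a standalone sketch using the interface and namespace names from this run:

    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp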
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.005 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.006 16:52:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:43.006 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:43.006 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.255 Initializing NVMe Controllers 00:12:55.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:55.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:55.256 Initialization complete. Launching workers. 00:12:55.256 ======================================================== 00:12:55.256 Latency(us) 00:12:55.256 Device Information : IOPS MiB/s Average min max 00:12:55.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18123.24 70.79 3532.50 860.84 15300.39 00:12:55.256 ======================================================== 00:12:55.256 Total : 18123.24 70.79 3532.50 860.84 15300.39 00:12:55.256 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.256 rmmod nvme_tcp 00:12:55.256 rmmod nvme_fabrics 00:12:55.256 rmmod nvme_keyring 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1327500 ']' 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1327500 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1327500 ']' 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1327500 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.256 16:52:13 
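The target bring-up and load generation logged above can be reproduced by hand against a running SPDK nvmf target; rpc_cmd in the test ultimately drives scripts/rpc.py against the default /var/tmp/spdk.sock. A sketch using the same arguments and the paths from this workspace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512      # 64 MB malloc bdev, 512 B blocks; named Malloc0 here
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 64-deep, 4 KiB, 30% read / 70% write random I/O for 10 seconds, run from the root namespace
    $SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'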
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1327500 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1327500' 00:12:55.256 killing process with pid 1327500 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1327500 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1327500 00:12:55.256 nvmf threads initialize successfully 00:12:55.256 bdev subsystem init successfully 00:12:55.256 created a nvmf target service 00:12:55.256 create targets's poll groups done 00:12:55.256 all subsystems of target started 00:12:55.256 nvmf target is running 00:12:55.256 all subsystems of target stopped 00:12:55.256 destroy targets's poll groups done 00:12:55.256 destroyed the nvmf target service 00:12:55.256 bdev subsystem finish successfully 00:12:55.256 nvmf threads destroy successfully 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.256 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.517 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.517 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:55.517 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.517 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 00:12:55.517 real 0m20.945s 00:12:55.517 user 0m46.494s 00:12:55.517 sys 0m6.432s 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:55.780 ************************************ 00:12:55.780 END TEST nvmf_example 00:12:55.780 ************************************ 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.780 16:52:15 
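Teardown for the example test, as logged above, is symmetric with the setup: stop the example target, unload the kernel NVMe/TCP modules, and dismantle the namespace. Roughly, with the _remove_spdk_ns helper approximated by an explicit netns delete (an assumption, not the helper's verbatim body):

    kill "$nvmfpid" && wait "$nvmfpid"   # wait works here because the target was started by this shell
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk      # assumption: stands in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1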
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.780 ************************************ 00:12:55.780 START TEST nvmf_filesystem 00:12:55.780 ************************************ 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:55.780 * Looking for test storage... 00:12:55.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:55.780 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:55.781 16:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:55.781 16:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:55.781 16:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:55.781 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:55.781 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:55.781 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:55.781 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:55.781 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:55.781 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:55.781 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:55.782 16:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:55.782 #define SPDK_CONFIG_H 00:12:55.782 #define SPDK_CONFIG_APPS 1 00:12:55.782 #define SPDK_CONFIG_ARCH native 00:12:55.782 #undef SPDK_CONFIG_ASAN 00:12:55.782 #undef SPDK_CONFIG_AVAHI 00:12:55.782 #undef SPDK_CONFIG_CET 00:12:55.782 #define SPDK_CONFIG_COVERAGE 1 00:12:55.782 #define SPDK_CONFIG_CROSS_PREFIX 00:12:55.782 #undef SPDK_CONFIG_CRYPTO 00:12:55.782 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:55.782 #undef SPDK_CONFIG_CUSTOMOCF 00:12:55.782 #undef SPDK_CONFIG_DAOS 00:12:55.782 #define SPDK_CONFIG_DAOS_DIR 00:12:55.782 #define SPDK_CONFIG_DEBUG 1 00:12:55.782 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:55.782 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:55.782 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:55.782 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:55.782 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:55.782 #undef SPDK_CONFIG_DPDK_UADK 00:12:55.782 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:55.782 #define SPDK_CONFIG_EXAMPLES 1 00:12:55.782 #undef SPDK_CONFIG_FC 00:12:55.782 #define SPDK_CONFIG_FC_PATH 00:12:55.782 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:55.782 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:55.782 #undef SPDK_CONFIG_FUSE 00:12:55.782 #undef SPDK_CONFIG_FUZZER 00:12:55.782 #define SPDK_CONFIG_FUZZER_LIB 00:12:55.782 #undef SPDK_CONFIG_GOLANG 00:12:55.782 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:55.782 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:55.782 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:55.782 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:55.782 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:55.782 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:55.782 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:55.782 #define SPDK_CONFIG_IDXD 1 00:12:55.782 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:55.782 #undef SPDK_CONFIG_IPSEC_MB 00:12:55.782 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:55.782 #define SPDK_CONFIG_ISAL 1 00:12:55.782 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:55.782 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:55.782 #define SPDK_CONFIG_LIBDIR 00:12:55.782 #undef SPDK_CONFIG_LTO 00:12:55.782 #define SPDK_CONFIG_MAX_LCORES 128 00:12:55.782 #define SPDK_CONFIG_NVME_CUSE 1 00:12:55.782 #undef SPDK_CONFIG_OCF 00:12:55.782 #define SPDK_CONFIG_OCF_PATH 00:12:55.782 #define SPDK_CONFIG_OPENSSL_PATH 00:12:55.782 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:55.782 #define SPDK_CONFIG_PGO_DIR 00:12:55.782 #undef SPDK_CONFIG_PGO_USE 00:12:55.782 #define SPDK_CONFIG_PREFIX /usr/local 00:12:55.782 #undef SPDK_CONFIG_RAID5F 00:12:55.782 #undef SPDK_CONFIG_RBD 00:12:55.782 #define SPDK_CONFIG_RDMA 1 00:12:55.782 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:55.782 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:55.782 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:55.782 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:55.782 #define SPDK_CONFIG_SHARED 1 00:12:55.782 #undef SPDK_CONFIG_SMA 00:12:55.782 #define SPDK_CONFIG_TESTS 1 00:12:55.782 #undef SPDK_CONFIG_TSAN 00:12:55.782 #define SPDK_CONFIG_UBLK 1 00:12:55.782 #define SPDK_CONFIG_UBSAN 1 00:12:55.782 #undef SPDK_CONFIG_UNIT_TESTS 00:12:55.782 #undef SPDK_CONFIG_URING 00:12:55.782 #define SPDK_CONFIG_URING_PATH 00:12:55.782 #undef SPDK_CONFIG_URING_ZNS 00:12:55.782 #undef SPDK_CONFIG_USDT 00:12:55.782 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:55.782 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:55.782 #define SPDK_CONFIG_VFIO_USER 1 00:12:55.782 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:12:55.782 #define SPDK_CONFIG_VHOST 1 00:12:55.782 #define SPDK_CONFIG_VIRTIO 1 00:12:55.782 #undef SPDK_CONFIG_VTUNE 00:12:55.782 #define SPDK_CONFIG_VTUNE_DIR 00:12:55.782 #define SPDK_CONFIG_WERROR 1 00:12:55.782 #define SPDK_CONFIG_WPDK_DIR 00:12:55.782 #undef SPDK_CONFIG_XNVME 00:12:55.782 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:55.782 16:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:55.782 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:55.783 16:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:55.783 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:56.047 16:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:56.047 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j144 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1330296 ]] 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1330296 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.EpLmld 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EpLmld/tests/target /tmp/spdk.EpLmld 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=954236928 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330192896 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=118571278336 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=129370976256 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10799697920 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64623304704 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685486080 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
fss["$mount"]=tmpfs 00:12:56.048 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=25850851328 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=25874198528 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23347200 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=efivarfs 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=efivarfs 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=216064 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=507904 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=287744 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64683663360 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685490176 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1826816 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12937093120 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12937097216 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:56.049 * Looking for test storage... 
00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=118571278336 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=13014290432 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.049 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:56.050 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.200 
16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:04.200 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:04.200 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:04.200 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:04.200 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.200 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:13:04.201 00:13:04.201 --- 10.0.0.2 ping statistics --- 00:13:04.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.201 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:04.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:13:04.201 00:13:04.201 --- 10.0.0.1 ping statistics --- 00:13:04.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.201 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:04.201 ************************************ 00:13:04.201 START TEST nvmf_filesystem_no_in_capsule 00:13:04.201 ************************************ 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1334156 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1334156 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1334156 ']' 00:13:04.201 
16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.201 16:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.201 [2024-07-25 16:52:23.699406] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:13:04.201 [2024-07-25 16:52:23.699468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.201 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.201 [2024-07-25 16:52:23.773262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.201 [2024-07-25 16:52:23.848597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.201 [2024-07-25 16:52:23.848638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.201 [2024-07-25 16:52:23.848646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.201 [2024-07-25 16:52:23.848652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.201 [2024-07-25 16:52:23.848658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
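For reference, the nvmf_tcp_init plumbing and the target launch traced above condense to the commands below. This is a sketch, not the exact nvmf/common.sh code: interface names, addresses and the nvmf_tgt invocation are the ones seen in this run, the path is shortened, and it must run as root.

  #!/usr/bin/env bash
  set -e
  # One E810 port (cvl_0_0) moves into a private namespace and acts as the target;
  # the other (cvl_0_1) stays in the root namespace as the initiator side.
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator
  # Launch the SPDK target inside the namespace, as nvmfappstart does in the trace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &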
00:13:04.201 [2024-07-25 16:52:23.848808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.201 [2024-07-25 16:52:23.848936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.201 [2024-07-25 16:52:23.849093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.201 [2024-07-25 16:52:23.849094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.463 [2024-07-25 16:52:24.527164] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.463 Malloc1 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.463 16:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.463 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.464 [2024-07-25 16:52:24.657403] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:04.464 { 00:13:04.464 "name": "Malloc1", 00:13:04.464 "aliases": [ 00:13:04.464 "f8b68dbd-2ea3-4d96-b7fe-f603fb5a7b2a" 00:13:04.464 ], 00:13:04.464 "product_name": "Malloc disk", 00:13:04.464 "block_size": 512, 00:13:04.464 "num_blocks": 1048576, 00:13:04.464 "uuid": "f8b68dbd-2ea3-4d96-b7fe-f603fb5a7b2a", 00:13:04.464 "assigned_rate_limits": { 00:13:04.464 "rw_ios_per_sec": 0, 00:13:04.464 "rw_mbytes_per_sec": 0, 00:13:04.464 "r_mbytes_per_sec": 0, 00:13:04.464 "w_mbytes_per_sec": 0 00:13:04.464 }, 00:13:04.464 "claimed": true, 00:13:04.464 "claim_type": "exclusive_write", 00:13:04.464 "zoned": false, 00:13:04.464 "supported_io_types": { 00:13:04.464 "read": 
true, 00:13:04.464 "write": true, 00:13:04.464 "unmap": true, 00:13:04.464 "flush": true, 00:13:04.464 "reset": true, 00:13:04.464 "nvme_admin": false, 00:13:04.464 "nvme_io": false, 00:13:04.464 "nvme_io_md": false, 00:13:04.464 "write_zeroes": true, 00:13:04.464 "zcopy": true, 00:13:04.464 "get_zone_info": false, 00:13:04.464 "zone_management": false, 00:13:04.464 "zone_append": false, 00:13:04.464 "compare": false, 00:13:04.464 "compare_and_write": false, 00:13:04.464 "abort": true, 00:13:04.464 "seek_hole": false, 00:13:04.464 "seek_data": false, 00:13:04.464 "copy": true, 00:13:04.464 "nvme_iov_md": false 00:13:04.464 }, 00:13:04.464 "memory_domains": [ 00:13:04.464 { 00:13:04.464 "dma_device_id": "system", 00:13:04.464 "dma_device_type": 1 00:13:04.464 }, 00:13:04.464 { 00:13:04.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.464 "dma_device_type": 2 00:13:04.464 } 00:13:04.464 ], 00:13:04.464 "driver_specific": {} 00:13:04.464 } 00:13:04.464 ]' 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:04.464 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:04.725 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:04.725 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:04.725 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:04.725 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:04.725 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.112 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.112 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.112 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.112 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.112 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.030 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.030 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.030 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:08.030 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.030 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.030 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:08.291 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:08.552 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:09.496 16:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.441 ************************************ 00:13:10.441 START TEST filesystem_ext4 00:13:10.441 ************************************ 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
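The rpc_cmd invocations in this block correspond to SPDK's scripts/rpc.py talking to the target running inside the namespace. A condensed sketch of the export-and-attach sequence follows; it assumes the default RPC socket and omits the --hostnqn/--hostid flags that the trace passes to nvme connect.

  #!/usr/bin/env bash
  RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"     # stand-in for rpc_cmd above
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data in this pass
  $RPC bdev_malloc_create 512 512 -b Malloc1               # 512 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side (root namespace): attach, locate the block device by its serial,
  # then create the single partition that the filesystem subtests format.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe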
00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:10.441 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:10.441 mke2fs 1.46.5 (30-Dec-2021) 00:13:10.441 Discarding device blocks: 0/522240 done 00:13:10.441 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:10.441 Filesystem UUID: cfb4f0e4-334a-4654-9c6f-cd6f8e9ea9f1 00:13:10.441 Superblock backups stored on blocks: 00:13:10.441 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:10.441 00:13:10.441 Allocating group tables: 0/64 done 00:13:10.441 Writing inode tables: 0/64 done 00:13:10.702 Creating journal (8192 blocks): done 00:13:10.702 Writing superblocks and filesystem accounting information: 0/64 done 00:13:10.702 00:13:10.702 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:10.702 16:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:10.963 
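Each filesystem_* subtest in this trace (ext4 here, btrfs and xfs below) runs the same smoke test from target/filesystem.sh. A simplified paraphrase is sketched below; the retry counter and error handling of the real script are omitted, and the function name is illustrative.

  #!/usr/bin/env bash
  fs_smoke_test() {                          # e.g. fs_smoke_test ext4 /dev/nvme0n1p1 1334156
      local fstype=$1 part=$2 nvmfpid=$3
      case "$fstype" in
          ext4) mkfs.ext4 -F "$part" ;;      # ext4 forces with -F
          *)    "mkfs.$fstype" -f "$part" ;; # btrfs and xfs force with -f
      esac
      mount "$part" /mnt/device
      touch /mnt/device/aaa                  # push a write through the NVMe/TCP path
      sync
      rm /mnt/device/aaa
      sync
      umount /mnt/device
      kill -0 "$nvmfpid"                     # target process must still be alive
      lsblk -l -o NAME | grep -q -w "${part##*/}"   # partition must still be listed
  }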
16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1334156 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:10.963 00:13:10.963 real 0m0.566s 00:13:10.963 user 0m0.024s 00:13:10.963 sys 0m0.077s 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:10.963 ************************************ 00:13:10.963 END TEST filesystem_ext4 00:13:10.963 ************************************ 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.963 ************************************ 00:13:10.963 START TEST filesystem_btrfs 00:13:10.963 ************************************ 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:10.963 16:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:10.963 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:11.534 btrfs-progs v6.6.2 00:13:11.534 See https://btrfs.readthedocs.io for more information. 00:13:11.534 00:13:11.534 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:11.534 NOTE: several default settings have changed in version 5.15, please make sure 00:13:11.534 this does not affect your deployments: 00:13:11.534 - DUP for metadata (-m dup) 00:13:11.534 - enabled no-holes (-O no-holes) 00:13:11.534 - enabled free-space-tree (-R free-space-tree) 00:13:11.534 00:13:11.534 Label: (null) 00:13:11.534 UUID: 154eabfc-1144-4a37-898d-7638d1ecbeb0 00:13:11.534 Node size: 16384 00:13:11.534 Sector size: 4096 00:13:11.534 Filesystem size: 510.00MiB 00:13:11.534 Block group profiles: 00:13:11.534 Data: single 8.00MiB 00:13:11.534 Metadata: DUP 32.00MiB 00:13:11.534 System: DUP 8.00MiB 00:13:11.534 SSD detected: yes 00:13:11.534 Zoned device: no 00:13:11.534 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:11.534 Runtime features: free-space-tree 00:13:11.534 Checksum: crc32c 00:13:11.534 Number of devices: 1 00:13:11.534 Devices: 00:13:11.534 ID SIZE PATH 00:13:11.534 1 510.00MiB /dev/nvme0n1p1 00:13:11.534 00:13:11.534 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:11.534 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1334156 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:11.795 00:13:11.795 real 0m0.765s 00:13:11.795 user 0m0.040s 00:13:11.795 sys 0m0.117s 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.795 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:11.795 ************************************ 00:13:11.795 END TEST filesystem_btrfs 00:13:11.795 ************************************ 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.795 ************************************ 00:13:11.795 START TEST filesystem_xfs 00:13:11.795 ************************************ 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:11.795 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:12.056 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:12.056 = sectsz=512 attr=2, projid32bit=1 00:13:12.056 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:12.056 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:13:12.056 data = bsize=4096 blocks=130560, imaxpct=25 00:13:12.056 = sunit=0 swidth=0 blks 00:13:12.056 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:12.056 log =internal log bsize=4096 blocks=16384, version=2 00:13:12.056 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:12.056 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:12.999 Discarding blocks...Done. 00:13:12.999 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:12.999 16:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1334156 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:15.546 00:13:15.546 real 0m3.630s 00:13:15.546 user 0m0.026s 00:13:15.546 sys 0m0.076s 00:13:15.546 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.547 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:15.547 ************************************ 00:13:15.547 END TEST filesystem_xfs 00:13:15.547 ************************************ 00:13:15.547 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:15.839 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1334156 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1334156 ']' 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1334156 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334156 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334156' 00:13:15.840 killing process with pid 1334156 00:13:15.840 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1334156 00:13:15.840 16:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1334156 00:13:16.101 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:16.101 00:13:16.101 real 0m12.674s 00:13:16.101 user 0m49.910s 00:13:16.101 sys 0m1.214s 00:13:16.101 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.101 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.101 ************************************ 00:13:16.101 END TEST nvmf_filesystem_no_in_capsule 00:13:16.101 ************************************ 00:13:16.101 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:16.101 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.101 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.101 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:16.362 ************************************ 00:13:16.362 START TEST nvmf_filesystem_in_capsule 00:13:16.363 ************************************ 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1336790 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1336790 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1336790 ']' 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
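Teardown for the first pass, as traced above, runs in reverse order: release the partition, detach the host, remove the subsystem over RPC, then stop the target. A minimal sketch using the same names (the pid is the one from this run):

  #!/usr/bin/env bash
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the initiator
  ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1334156                                          # killprocess: SIGTERM nvmf_tgt (the harness then waits on the pid)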
00:13:16.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.363 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.363 [2024-07-25 16:52:36.453385] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:13:16.363 [2024-07-25 16:52:36.453436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.363 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.363 [2024-07-25 16:52:36.523037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.363 [2024-07-25 16:52:36.596286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.363 [2024-07-25 16:52:36.596323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.363 [2024-07-25 16:52:36.596330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.363 [2024-07-25 16:52:36.596338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.363 [2024-07-25 16:52:36.596344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.363 [2024-07-25 16:52:36.596483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.363 [2024-07-25 16:52:36.596599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.363 [2024-07-25 16:52:36.596733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.363 [2024-07-25 16:52:36.596735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
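The second pass (nvmf_filesystem_in_capsule) repeats the same flow; the only functional change is the transport created above, which now permits 4096 bytes of in-capsule data, so small write payloads can ride inside the NVMe/TCP command capsule instead of requiring a separate host-to-controller data transfer. Side by side, with rpc.py standing in for rpc_cmd as before:

  # First pass  (in_capsule=0):    no data carried in the command capsule
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # Second pass (in_capsule=4096): up to 4 KiB of write data travels in-capsule
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096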
00:13:17.309 [2024-07-25 16:52:37.276188] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.309 Malloc1 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.309 [2024-07-25 16:52:37.404013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:17.309 16:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:17.309 { 00:13:17.309 "name": "Malloc1", 00:13:17.309 "aliases": [ 00:13:17.309 "ed4cb31d-2532-4381-a7c9-5d39d2f2cb73" 00:13:17.309 ], 00:13:17.309 "product_name": "Malloc disk", 00:13:17.309 "block_size": 512, 00:13:17.309 "num_blocks": 1048576, 00:13:17.309 "uuid": "ed4cb31d-2532-4381-a7c9-5d39d2f2cb73", 00:13:17.309 "assigned_rate_limits": { 00:13:17.309 "rw_ios_per_sec": 0, 00:13:17.309 "rw_mbytes_per_sec": 0, 00:13:17.309 "r_mbytes_per_sec": 0, 00:13:17.309 "w_mbytes_per_sec": 0 00:13:17.309 }, 00:13:17.309 "claimed": true, 00:13:17.309 "claim_type": "exclusive_write", 00:13:17.309 "zoned": false, 00:13:17.309 "supported_io_types": { 00:13:17.309 "read": true, 00:13:17.309 "write": true, 00:13:17.309 "unmap": true, 00:13:17.309 "flush": true, 00:13:17.309 "reset": true, 00:13:17.309 "nvme_admin": false, 00:13:17.309 "nvme_io": false, 00:13:17.309 "nvme_io_md": false, 00:13:17.309 "write_zeroes": true, 00:13:17.309 "zcopy": true, 00:13:17.309 "get_zone_info": false, 00:13:17.309 "zone_management": false, 00:13:17.309 "zone_append": false, 00:13:17.309 "compare": false, 00:13:17.309 "compare_and_write": false, 00:13:17.309 "abort": true, 00:13:17.309 "seek_hole": false, 00:13:17.309 "seek_data": false, 00:13:17.309 "copy": true, 00:13:17.309 "nvme_iov_md": false 00:13:17.309 }, 00:13:17.309 "memory_domains": [ 00:13:17.309 { 00:13:17.309 "dma_device_id": "system", 00:13:17.309 "dma_device_type": 1 00:13:17.309 }, 00:13:17.309 { 00:13:17.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.309 "dma_device_type": 2 00:13:17.309 } 00:13:17.309 ], 00:13:17.309 "driver_specific": {} 00:13:17.309 } 00:13:17.309 ]' 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:17.309 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:17.309 16:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.223 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.223 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.223 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.223 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:19.223 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:21.158 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:21.419 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:21.681 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.068 ************************************ 00:13:23.068 START TEST filesystem_in_capsule_ext4 00:13:23.068 ************************************ 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:23.068 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:23.068 mke2fs 1.46.5 (30-Dec-2021) 00:13:23.068 Discarding device blocks: 0/522240 done 00:13:23.068 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:23.068 Filesystem UUID: aaf88190-e8d3-4b85-9a25-cfa27c0bad99 00:13:23.068 Superblock backups stored on blocks: 00:13:23.068 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:13:23.068 00:13:23.068 Allocating group tables: 0/64 done 00:13:23.068 Writing inode tables: 0/64 done 00:13:23.068 Creating journal (8192 blocks): done 00:13:23.068 Writing superblocks and filesystem accounting information: 0/64 done 00:13:23.068 00:13:23.068 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:23.068 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1336790 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:23.329 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:23.589 00:13:23.589 real 0m0.634s 00:13:23.589 user 0m0.029s 00:13:23.589 sys 0m0.067s 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:23.589 ************************************ 00:13:23.589 END TEST filesystem_in_capsule_ext4 00:13:23.589 ************************************ 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.589 16:52:43 
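[editor's note] Each filesystem variant in this trace runs the same smoke test: label the disk, make one partition, build the filesystem, mount it, create and remove a file with syncs in between, unmount, then confirm the target process (pid 1336790 in this run) is still alive and the namespace is still visible. Distilled into a hedged sketch for the ext4 case just shown; btrfs and xfs below differ only in the mkfs command:

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 1336790                           # nvmf_tgt must not have crashed
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1
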
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.589 ************************************ 00:13:23.589 START TEST filesystem_in_capsule_btrfs 00:13:23.589 ************************************ 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:23.589 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:23.590 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:23.851 btrfs-progs v6.6.2 00:13:23.851 See https://btrfs.readthedocs.io for more information. 00:13:23.851 00:13:23.851 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:23.851 NOTE: several default settings have changed in version 5.15, please make sure 00:13:23.851 this does not affect your deployments: 00:13:23.851 - DUP for metadata (-m dup) 00:13:23.851 - enabled no-holes (-O no-holes) 00:13:23.851 - enabled free-space-tree (-R free-space-tree) 00:13:23.851 00:13:23.851 Label: (null) 00:13:23.851 UUID: 85b728a5-415a-4f64-9553-d26f8df103ac 00:13:23.851 Node size: 16384 00:13:23.851 Sector size: 4096 00:13:23.851 Filesystem size: 510.00MiB 00:13:23.851 Block group profiles: 00:13:23.851 Data: single 8.00MiB 00:13:23.851 Metadata: DUP 32.00MiB 00:13:23.851 System: DUP 8.00MiB 00:13:23.851 SSD detected: yes 00:13:23.851 Zoned device: no 00:13:23.851 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:23.851 Runtime features: free-space-tree 00:13:23.851 Checksum: crc32c 00:13:23.851 Number of devices: 1 00:13:23.851 Devices: 00:13:23.851 ID SIZE PATH 00:13:23.851 1 510.00MiB /dev/nvme0n1p1 00:13:23.851 00:13:23.851 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:23.851 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:24.422 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:24.422 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1336790 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:24.423 00:13:24.423 real 0m0.771s 00:13:24.423 user 0m0.023s 00:13:24.423 sys 0m0.137s 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.423 16:52:44 
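[editor's note] The make_filesystem helper traced for ext4 and btrfs above (autotest_common.sh@926-937) only differs per filesystem in the force flag it passes: -F for ext4, -f otherwise, exactly as the '[ btrfs = ext4 ]' / force=-f lines show. A hedged reconstruction of that branch, ignoring the retry bookkeeping the real helper carries:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs."$fstype" "$force" "$dev_name"
  }
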
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:24.423 ************************************ 00:13:24.423 END TEST filesystem_in_capsule_btrfs 00:13:24.423 ************************************ 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.423 ************************************ 00:13:24.423 START TEST filesystem_in_capsule_xfs 00:13:24.423 ************************************ 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:24.423 16:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:24.423 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:24.423 = sectsz=512 attr=2, projid32bit=1 00:13:24.423 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:24.423 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:24.423 data = bsize=4096 blocks=130560, imaxpct=25 00:13:24.423 = sunit=0 swidth=0 blks 00:13:24.423 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:24.423 log =internal log bsize=4096 blocks=16384, version=2 00:13:24.423 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:24.423 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:13:25.366 Discarding blocks...Done. 00:13:25.366 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:25.366 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:27.913 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:27.913 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:27.913 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:27.913 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:27.913 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1336790 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:27.914 00:13:27.914 real 0m3.511s 00:13:27.914 user 0m0.026s 00:13:27.914 sys 0m0.077s 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:27.914 ************************************ 00:13:27.914 END TEST filesystem_in_capsule_xfs 00:13:27.914 ************************************ 00:13:27.914 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:28.175 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.747 16:52:48 
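[editor's note] After the xfs test the partition is removed under flock and the host disconnects (filesystem.sh@91-95 above); waitforserial_disconnect then polls until the serial is gone. Condensed, with the device, NQN and serial from this log; the polling loop is a simplified stand-in for the helper:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
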
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1336790 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1336790 ']' 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1336790 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.747 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1336790 00:13:28.747 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.747 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.747 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1336790' 00:13:28.747 killing process with pid 1336790 00:13:28.747 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1336790 00:13:28.747 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1336790 00:13:29.009 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:29.009 00:13:29.009 real 0m12.855s 00:13:29.009 user 0m50.663s 
00:13:29.009 sys 0m1.199s 00:13:29.009 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.009 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.009 ************************************ 00:13:29.009 END TEST nvmf_filesystem_in_capsule 00:13:29.009 ************************************ 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.271 rmmod nvme_tcp 00:13:29.271 rmmod nvme_fabrics 00:13:29.271 rmmod nvme_keyring 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.271 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.189 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:31.189 00:13:31.189 real 0m35.572s 00:13:31.189 user 1m42.823s 00:13:31.189 sys 0m8.132s 00:13:31.189 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.189 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.189 ************************************ 00:13:31.189 END TEST nvmf_filesystem 00:13:31.189 ************************************ 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra -- 
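[editor's note] nvmftestfini, traced above, unloads the kernel initiator modules, removes the target namespace and flushes the initiator address. A hedged condensation; the _remove_spdk_ns internals are not shown in this trace, so the netns delete below is an assumption about what it does:

  modprobe -v -r nvme-tcp        # its verbose output is the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1
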
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.450 ************************************ 00:13:31.450 START TEST nvmf_target_discovery 00:13:31.450 ************************************ 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:31.450 * Looking for test storage... 00:13:31.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.450 16:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:31.450 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.663 16:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:39.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:39.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.663 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:39.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.664 16:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:39.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.664 16:52:58 
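[editor's note] nvmf_tcp_init (nvmf/common.sh@229-261 above) moves the first e810 port into a private namespace to act as the target and leaves the second port in the root namespace as the initiator. The commands from this trace, collected in order for readability:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
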
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:39.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:13:39.664 00:13:39.664 --- 10.0.0.2 ping statistics --- 00:13:39.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.664 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:13:39.664 00:13:39.664 --- 10.0.0.1 ping statistics --- 00:13:39.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.664 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1343668 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1343668 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1343668 ']' 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.664 16:52:58 
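[editor's note] With connectivity confirmed by the two pings above, nvmfappstart launches the target inside the namespace and waits for its RPC socket (nvmf/common.sh@480-482). A hedged sketch: the polling loop is a simplified stand-in for the waitforlisten helper, and rpc_get_methods is used only as a cheap probe of the socket:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done
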
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.664 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.664 [2024-07-25 16:52:58.884772] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:13:39.664 [2024-07-25 16:52:58.884837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.664 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.664 [2024-07-25 16:52:58.956286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.664 [2024-07-25 16:52:59.031650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.664 [2024-07-25 16:52:59.031688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.664 [2024-07-25 16:52:59.031696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.664 [2024-07-25 16:52:59.031702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.664 [2024-07-25 16:52:59.031708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.664 [2024-07-25 16:52:59.031853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.664 [2024-07-25 16:52:59.031966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.664 [2024-07-25 16:52:59.032121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.664 [2024-07-25 16:52:59.032122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.664 [2024-07-25 16:52:59.706096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.664 Null1 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.664 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 [2024-07-25 16:52:59.766391] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 Null2 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 
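[editor's note] discovery.sh@23-30, traced above and continuing below, creates the TCP transport and then, for each of four subsystems, a null bdev, the subsystem itself, a namespace and a TCP listener. Condensed into a hedged sketch; rpc_cmd in the trace is shown here as scripts/rpc.py, and all values are the ones visible in this log:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 4); do
      ./scripts/rpc.py bdev_null_create Null$i 102400 512
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
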
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 Null3 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 Null4 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.665 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.927 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.927 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:39.927 00:13:39.927 Discovery Log Number of Records 6, Generation counter 6 00:13:39.927 =====Discovery Log Entry 0====== 00:13:39.927 trtype: tcp 00:13:39.927 adrfam: ipv4 00:13:39.927 subtype: current discovery subsystem 00:13:39.927 treq: not required 00:13:39.927 portid: 0 00:13:39.927 trsvcid: 4420 00:13:39.927 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:39.927 traddr: 10.0.0.2 00:13:39.927 eflags: explicit discovery connections, duplicate discovery information 00:13:39.927 sectype: none 00:13:39.927 =====Discovery Log Entry 1====== 00:13:39.927 trtype: tcp 00:13:39.927 adrfam: ipv4 00:13:39.927 subtype: nvme subsystem 00:13:39.927 treq: not required 00:13:39.927 portid: 0 00:13:39.927 trsvcid: 4420 00:13:39.927 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:39.927 traddr: 10.0.0.2 00:13:39.927 eflags: none 00:13:39.927 sectype: none 00:13:39.927 =====Discovery Log Entry 2====== 00:13:39.927 trtype: tcp 00:13:39.927 adrfam: ipv4 00:13:39.927 subtype: nvme subsystem 00:13:39.927 treq: not required 00:13:39.927 portid: 0 00:13:39.927 trsvcid: 4420 00:13:39.927 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:39.927 traddr: 10.0.0.2 00:13:39.927 eflags: none 00:13:39.927 sectype: none 00:13:39.927 =====Discovery Log Entry 3====== 00:13:39.927 trtype: tcp 00:13:39.927 adrfam: ipv4 00:13:39.927 subtype: nvme subsystem 00:13:39.927 treq: not required 00:13:39.927 portid: 0 00:13:39.927 trsvcid: 4420 00:13:39.927 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:39.927 traddr: 10.0.0.2 00:13:39.927 eflags: none 00:13:39.927 sectype: none 00:13:39.927 =====Discovery Log Entry 4====== 00:13:39.927 trtype: tcp 00:13:39.927 adrfam: ipv4 00:13:39.927 subtype: nvme subsystem 00:13:39.927 treq: not required 00:13:39.927 portid: 0 00:13:39.927 trsvcid: 4420 00:13:39.927 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:39.927 traddr: 10.0.0.2 00:13:39.927 eflags: none 00:13:39.927 sectype: none 00:13:39.927 =====Discovery Log Entry 5====== 00:13:39.927 trtype: tcp 00:13:39.927 adrfam: ipv4 00:13:39.927 subtype: discovery subsystem referral 00:13:39.927 treq: not required 00:13:39.927 portid: 0 00:13:39.927 trsvcid: 4430 00:13:39.927 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:39.927 traddr: 10.0.0.2 00:13:39.927 eflags: none 00:13:39.927 sectype: none 00:13:39.927 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:39.927 Perform nvmf subsystem discovery via RPC 00:13:39.927 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:39.927 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 [ 00:13:39.928 { 00:13:39.928 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.928 "subtype": "Discovery", 00:13:39.928 "listen_addresses": [ 00:13:39.928 { 00:13:39.928 "trtype": "TCP", 00:13:39.928 "adrfam": "IPv4", 00:13:39.928 "traddr": "10.0.0.2", 00:13:39.928 "trsvcid": "4420" 00:13:39.928 } 00:13:39.928 ], 00:13:39.928 "allow_any_host": true, 00:13:39.928 "hosts": [] 00:13:39.928 }, 00:13:39.928 { 00:13:39.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.928 "subtype": "NVMe", 00:13:39.928 "listen_addresses": [ 00:13:39.928 { 00:13:39.928 "trtype": "TCP", 00:13:39.928 "adrfam": "IPv4", 00:13:39.928 
"traddr": "10.0.0.2", 00:13:39.928 "trsvcid": "4420" 00:13:39.928 } 00:13:39.928 ], 00:13:39.928 "allow_any_host": true, 00:13:39.928 "hosts": [], 00:13:39.928 "serial_number": "SPDK00000000000001", 00:13:39.928 "model_number": "SPDK bdev Controller", 00:13:39.928 "max_namespaces": 32, 00:13:39.928 "min_cntlid": 1, 00:13:39.928 "max_cntlid": 65519, 00:13:39.928 "namespaces": [ 00:13:39.928 { 00:13:39.928 "nsid": 1, 00:13:39.928 "bdev_name": "Null1", 00:13:39.928 "name": "Null1", 00:13:39.928 "nguid": "DD12BAA9DA394B56A594F1F4125AE19A", 00:13:39.928 "uuid": "dd12baa9-da39-4b56-a594-f1f4125ae19a" 00:13:39.928 } 00:13:39.928 ] 00:13:39.928 }, 00:13:39.928 { 00:13:39.928 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:39.928 "subtype": "NVMe", 00:13:39.928 "listen_addresses": [ 00:13:39.928 { 00:13:39.928 "trtype": "TCP", 00:13:39.928 "adrfam": "IPv4", 00:13:39.928 "traddr": "10.0.0.2", 00:13:39.928 "trsvcid": "4420" 00:13:39.928 } 00:13:39.928 ], 00:13:39.928 "allow_any_host": true, 00:13:39.928 "hosts": [], 00:13:39.928 "serial_number": "SPDK00000000000002", 00:13:39.928 "model_number": "SPDK bdev Controller", 00:13:39.928 "max_namespaces": 32, 00:13:39.928 "min_cntlid": 1, 00:13:39.928 "max_cntlid": 65519, 00:13:39.928 "namespaces": [ 00:13:39.928 { 00:13:39.928 "nsid": 1, 00:13:39.928 "bdev_name": "Null2", 00:13:39.928 "name": "Null2", 00:13:39.928 "nguid": "7D55F445688544909446B489DEF1A5B2", 00:13:39.928 "uuid": "7d55f445-6885-4490-9446-b489def1a5b2" 00:13:39.928 } 00:13:39.928 ] 00:13:39.928 }, 00:13:39.928 { 00:13:39.928 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:39.928 "subtype": "NVMe", 00:13:39.928 "listen_addresses": [ 00:13:39.928 { 00:13:39.928 "trtype": "TCP", 00:13:39.928 "adrfam": "IPv4", 00:13:39.928 "traddr": "10.0.0.2", 00:13:39.928 "trsvcid": "4420" 00:13:39.928 } 00:13:39.928 ], 00:13:39.928 "allow_any_host": true, 00:13:39.928 "hosts": [], 00:13:39.928 "serial_number": "SPDK00000000000003", 00:13:39.928 "model_number": "SPDK bdev Controller", 00:13:39.928 "max_namespaces": 32, 00:13:39.928 "min_cntlid": 1, 00:13:39.928 "max_cntlid": 65519, 00:13:39.928 "namespaces": [ 00:13:39.928 { 00:13:39.928 "nsid": 1, 00:13:39.928 "bdev_name": "Null3", 00:13:39.928 "name": "Null3", 00:13:39.928 "nguid": "DE897AD2503247168E082234554B87CA", 00:13:39.928 "uuid": "de897ad2-5032-4716-8e08-2234554b87ca" 00:13:39.928 } 00:13:39.928 ] 00:13:39.928 }, 00:13:39.928 { 00:13:39.928 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:39.928 "subtype": "NVMe", 00:13:39.928 "listen_addresses": [ 00:13:39.928 { 00:13:39.928 "trtype": "TCP", 00:13:39.928 "adrfam": "IPv4", 00:13:39.928 "traddr": "10.0.0.2", 00:13:39.928 "trsvcid": "4420" 00:13:39.928 } 00:13:39.928 ], 00:13:39.928 "allow_any_host": true, 00:13:39.928 "hosts": [], 00:13:39.928 "serial_number": "SPDK00000000000004", 00:13:39.928 "model_number": "SPDK bdev Controller", 00:13:39.928 "max_namespaces": 32, 00:13:39.928 "min_cntlid": 1, 00:13:39.928 "max_cntlid": 65519, 00:13:39.928 "namespaces": [ 00:13:39.928 { 00:13:39.928 "nsid": 1, 00:13:39.928 "bdev_name": "Null4", 00:13:39.928 "name": "Null4", 00:13:39.928 "nguid": "6C3FEA4918E6404792C908DF0282C486", 00:13:39.928 "uuid": "6c3fea49-18e6-4047-92c9-08df0282c486" 00:13:39.928 } 00:13:39.928 ] 00:13:39.928 } 00:13:39.928 ] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:39.928 16:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.928 16:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:39.928 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.190 rmmod nvme_tcp 00:13:40.190 rmmod nvme_fabrics 00:13:40.190 rmmod nvme_keyring 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.190 16:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:40.190 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1343668 ']' 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1343668 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1343668 ']' 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1343668 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1343668 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1343668' 00:13:40.191 killing process with pid 1343668 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1343668 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1343668 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.191 16:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:42.747 00:13:42.747 real 0m10.993s 00:13:42.747 user 0m7.889s 00:13:42.747 sys 0m5.658s 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.747 ************************************ 00:13:42.747 END TEST nvmf_target_discovery 00:13:42.747 ************************************ 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.747 ************************************ 00:13:42.747 START TEST nvmf_referrals 00:13:42.747 ************************************ 00:13:42.747 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:42.747 * Looking for test storage... 00:13:42.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.748 16:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.748 16:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.748 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:49.355 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.355 16:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:49.355 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:49.355 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 
00:13:49.355 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.355 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.617 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.617 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.617 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:49.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:49.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:13:49.618 00:13:49.618 --- 10.0.0.2 ping statistics --- 00:13:49.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.618 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:13:49.618 00:13:49.618 --- 10.0.0.1 ping statistics --- 00:13:49.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.618 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1348031 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1348031 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1348031 ']' 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
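The block above is nvmftestinit for the referrals test: nvmf/common.sh picks the two E810 ports (cvl_0_0, cvl_0_1), moves the target-side port into its own network namespace, assigns the 10.0.0.1/10.0.0.2 pair, opens TCP 4420 in iptables, and ping-checks both directions before launching nvmf_tgt inside the namespace. A condensed sketch of that wiring, using only commands that appear in the trace (the interface names are whatever NICs this CI host exposes, and the relative nvmf_tgt path is assumed):

# Sketch of the netns-based target/initiator split performed by nvmf_tcp_init above.
ns=cvl_0_0_ns_spdk

ip netns add $ns
ip link set cvl_0_0 netns $ns                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                      # root namespace -> target
ip netns exec $ns ping -c 1 10.0.0.1                    # namespace -> initiator

ip netns exec $ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace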
00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.618 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.879 [2024-07-25 16:53:09.893349] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:13:49.879 [2024-07-25 16:53:09.893415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.879 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.879 [2024-07-25 16:53:09.964671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.879 [2024-07-25 16:53:10.038443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.879 [2024-07-25 16:53:10.038483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.879 [2024-07-25 16:53:10.038491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.879 [2024-07-25 16:53:10.038498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.879 [2024-07-25 16:53:10.038504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.879 [2024-07-25 16:53:10.038642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.879 [2024-07-25 16:53:10.038762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.879 [2024-07-25 16:53:10.038920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.879 [2024-07-25 16:53:10.038921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.452 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.452 [2024-07-25 16:53:10.721197] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd 
nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 [2024-07-25 16:53:10.737380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 16:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:50.713 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.975 16:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.975 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:51.236 16:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:51.236 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:51.497 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:51.497 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:51.497 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:51.497 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:51.497 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:51.497 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:51.758 16:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:51.758 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:52.033 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:52.033 16:53:12 
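
The -n argument ties a referral to a subsystem NQN, and that choice decides the subtype of the discovery-log record the target exports: '-n discovery' (the well-known nqn.2014-08.org.nvmexpress.discovery) produces a "discovery subsystem referral" record, while '-n nqn.2016-06.io.spdk:cnode1' produces an "nvme subsystem" record, which is exactly what the jq checks above pick apart. A small helper in the spirit of the script's get_discovery_entries, with the discovery address carried over from this run as an assumption:

    # Print the discovery-log records of a given subtype, as the test's get_discovery_entries does.
    get_discovery_entries() {
        local subtype=$1
        nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
            | jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
    }

    get_discovery_entries "nvme subsystem"               | jq -r .subnqn  # expect nqn.2016-06.io.spdk:cnode1
    get_discovery_entries "discovery subsystem referral" | jq -r .subnqn  # expect nqn.2014-08.org.nvmexpress.discovery
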
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:52.033 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:52.033 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:52.033 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:52.033 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:52.034 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@86 -- # nvmftestfini 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.295 rmmod nvme_tcp 00:13:52.295 rmmod nvme_fabrics 00:13:52.295 rmmod nvme_keyring 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1348031 ']' 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1348031 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1348031 ']' 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1348031 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348031 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348031' 00:13:52.295 killing process with pid 1348031 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1348031 00:13:52.295 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1348031 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.556 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.472 16:53:14 
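
nvmftestfini unwinds the fixture in a fixed order: unload the initiator-side NVMe kernel modules, kill the nvmf_tgt reactor, then drop the network-namespace plumbing. Roughly, using the PID and namespace name from this run (the 'ip netns delete' step is an assumption about what _remove_spdk_ns does under the hood):

    nvmfpid=1348031                   # nvmf_tgt PID recorded earlier in this run
    sync
    modprobe -v -r nvme-tcp           # source of the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                   # stop the reactor_0 process (the harness also waits on it)
    ip netns delete cvl_0_0_ns_spdk   # assumption: the namespace cleanup hidden inside _remove_spdk_ns
    ip -4 addr flush cvl_0_1
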
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.472 00:13:54.472 real 0m12.068s 00:13:54.472 user 0m13.028s 00:13:54.472 sys 0m6.017s 00:13:54.472 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.472 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.472 ************************************ 00:13:54.472 END TEST nvmf_referrals 00:13:54.472 ************************************ 00:13:54.472 16:53:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:54.472 16:53:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:54.472 16:53:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.472 16:53:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.472 ************************************ 00:13:54.472 START TEST nvmf_connect_disconnect 00:13:54.472 ************************************ 00:13:54.472 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:54.734 * Looking for test storage... 00:13:54.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:54.734 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.735 16:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.884 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.884 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.884 16:53:21 
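
gather_supported_nvmf_pci_devs works from a whitelist of NIC device IDs (e810 and x722 for Intel, the mlx5 family for Mellanox) and then resolves each matching PCI function to its kernel net device through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 end up reported as cvl_0_0 and cvl_0_1 below. The resolution step is just a directory listing, shown here for the first port found in this run:

    # Resolve a detected PCI function to the net device udev created for it.
    pci=0000:4b:00.0                       # first e810 port reported above (0x8086 - 0x159b)
    ls /sys/bus/pci/devices/$pci/net/      # prints the interface name, cvl_0_0 on this machine
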
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.884 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.884 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.885 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:14:02.885 00:14:02.885 --- 10.0.0.2 ping statistics --- 00:14:02.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.885 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:14:02.885 00:14:02.885 --- 10.0.0.1 ping statistics --- 00:14:02.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.885 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1352788 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1352788 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1352788 ']' 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 [2024-07-25 16:53:22.211546] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
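
The namespace plumbing traced just above is what lets a single machine act as both NVMe/TCP target and initiator over its two physical e810 ports: one port (cvl_0_0 here) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer stays in the root namespace as 10.0.0.1, port 4420 is opened in iptables, and reachability is proven with a ping in each direction before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch (interface names are the ones udev assigned on this host; the relative nvmf_tgt path is an assumption):

    # Move one e810 port into a private namespace as the target side, keep its peer as the initiator.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Prove reachability in both directions before bringing the target up.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the SPDK target inside the namespace (relative binary path is an assumption).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
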
00:14:02.885 [2024-07-25 16:53:22.211613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.885 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.885 [2024-07-25 16:53:22.283426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.885 [2024-07-25 16:53:22.358360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.885 [2024-07-25 16:53:22.358397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.885 [2024-07-25 16:53:22.358404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.885 [2024-07-25 16:53:22.358411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.885 [2024-07-25 16:53:22.358417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.885 [2024-07-25 16:53:22.358562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.885 [2024-07-25 16:53:22.358677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.885 [2024-07-25 16:53:22.358832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.885 [2024-07-25 16:53:22.358834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.885 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 [2024-07-25 16:53:23.049219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.885 16:53:23 
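
Once the app reports its reactors are up and waitforlisten sees the RPC socket, all further configuration happens over RPC: the trace above creates the TCP transport with nvmf_create_transport -t tcp -o -u 8192 -c 0 and then a 64 MiB, 512-byte-block malloc bdev to back the test namespace. The same two calls issued directly, assuming rpc.py at ./scripts/rpc.py and the default /var/tmp/spdk.sock socket:

    rpc=./scripts/rpc.py                 # assumed path to SPDK's RPC client
    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0   # enable the TCP transport (8192-byte in-capsule data, no shared buffers)
    "$rpc" bdev_malloc_create 64 512                      # 64 MiB malloc bdev, 512-byte blocks; returns "Malloc0"
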
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.885 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.886 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.886 [2024-07-25 16:53:23.108660] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.886 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.886 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:02.886 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:02.886 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:07.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.263 16:53:41 
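
With the transport and Malloc0 in place, the test provisions a complete subsystem over RPC and then cycles the initiator through five connect/disconnect iterations; only nvme-cli's "disconnected 1 controller(s)" messages surface above because the loop body runs with xtrace suppressed. A sketch of the equivalent sequence, again assuming ./scripts/rpc.py, and using the standard nvme-cli connect/disconnect forms for the part the trace hides (the script's exact invocation is not visible, so treat those two commands as an assumption):

    rpc=./scripts/rpc.py
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Five connect/disconnect cycles against the new subsystem.
    for i in $(seq 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done
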
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.263 rmmod nvme_tcp 00:14:21.263 rmmod nvme_fabrics 00:14:21.263 rmmod nvme_keyring 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1352788 ']' 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1352788 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1352788 ']' 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1352788 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.263 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1352788 00:14:21.264 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:21.264 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:21.264 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1352788' 00:14:21.264 killing process with pid 1352788 00:14:21.264 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1352788 00:14:21.264 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1352788 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.525 16:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.070 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.070 00:14:24.070 real 0m29.023s 00:14:24.070 user 1m19.007s 00:14:24.070 sys 0m6.662s 00:14:24.070 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:24.070 16:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:24.070 ************************************ 00:14:24.070 END TEST nvmf_connect_disconnect 00:14:24.070 ************************************ 00:14:24.070 16:53:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:24.070 16:53:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:24.070 16:53:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.071 ************************************ 00:14:24.071 START TEST nvmf_multitarget 00:14:24.071 ************************************ 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:24.071 * Looking for test storage... 00:14:24.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.071 16:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.071 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:30.664 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.664 16:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:30.664 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:30.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:30.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:30.664 16:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:30.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:14:30.926 00:14:30.926 --- 10.0.0.2 ping statistics --- 00:14:30.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.926 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.409 ms 00:14:30.926 00:14:30.926 --- 10.0.0.1 ping statistics --- 00:14:30.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.926 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:30.926 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1360896 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1360896 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1360896 ']' 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
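The namespace setup that nvmf_tcp_init replays in the trace above reduces to a short command sequence. The sketch below is condensed from the commands visible in this log, not a verbatim excerpt of nvmf/common.sh; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones used in this particular run.

# Put one NIC port into a private namespace so the target side (10.0.0.2) and
# the initiator side (10.0.0.1) can exchange NVMe/TCP traffic on the same host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # the ping shown above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # reverse direction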
00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.927 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.927 [2024-07-25 16:53:51.194586] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:14:30.927 [2024-07-25 16:53:51.194651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.187 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.187 [2024-07-25 16:53:51.264813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.187 [2024-07-25 16:53:51.334681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.187 [2024-07-25 16:53:51.334721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.187 [2024-07-25 16:53:51.334729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.187 [2024-07-25 16:53:51.334735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.187 [2024-07-25 16:53:51.334741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.187 [2024-07-25 16:53:51.334884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.187 [2024-07-25 16:53:51.335005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.187 [2024-07-25 16:53:51.335162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.187 [2024-07-25 16:53:51.335163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.818 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.818 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:31.818 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.818 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:31.818 16:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:31.818 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.818 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:31.818 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:31.818 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:32.102 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:32.102 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:32.102 "nvmf_tgt_1" 00:14:32.102 16:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:32.102 "nvmf_tgt_2" 00:14:32.102 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:32.102 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:32.363 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:32.363 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:32.363 true 00:14:32.363 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:32.363 true 00:14:32.363 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:32.363 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.624 rmmod nvme_tcp 00:14:32.624 rmmod nvme_fabrics 00:14:32.624 rmmod nvme_keyring 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1360896 ']' 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1360896 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1360896 ']' 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1360896 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
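As an aside on the RPC calls exercised by this test: the multitarget flow above is just the stock multitarget_rpc.py script plus jq length checks. A minimal sketch of the sequence recorded in the trace follows; the 1, 3, 1 target counts assume the default target created at startup plus the two targets added here.

# Create two extra nvmf targets, confirm the count, then delete them again.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$rpc nvmf_get_targets | jq length           # 1: only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc nvmf_get_targets | jq length           # 3: default + nvmf_tgt_1 + nvmf_tgt_2
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
$rpc nvmf_get_targets | jq length           # back to 1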
00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1360896 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1360896' 00:14:32.624 killing process with pid 1360896 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1360896 00:14:32.624 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1360896 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.886 16:53:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.800 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:34.800 00:14:34.800 real 0m11.206s 00:14:34.800 user 0m9.278s 00:14:34.800 sys 0m5.745s 00:14:34.800 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.800 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:34.800 ************************************ 00:14:34.800 END TEST nvmf_multitarget 00:14:34.800 ************************************ 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.062 ************************************ 00:14:35.062 START TEST nvmf_rpc 00:14:35.062 ************************************ 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:35.062 * Looking for test storage... 
00:14:35.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:35.062 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.063 16:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.063 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.212 16:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:43.212 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:43.212 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.212 
16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.212 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:43.213 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:43.213 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.213 16:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:14:43.213 00:14:43.213 --- 10.0.0.2 ping statistics --- 00:14:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.213 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:14:43.213 00:14:43.213 --- 10.0.0.1 ping statistics --- 00:14:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.213 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1365368 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1365368 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1365368 ']' 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.213 16:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.213 [2024-07-25 16:54:02.561353] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:14:43.213 [2024-07-25 16:54:02.561423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.213 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.213 [2024-07-25 16:54:02.633609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.213 [2024-07-25 16:54:02.708588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.213 [2024-07-25 16:54:02.708624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.213 [2024-07-25 16:54:02.708633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.213 [2024-07-25 16:54:02.708639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.213 [2024-07-25 16:54:02.708645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
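The target application whose startup banner appears above is launched inside the target-side namespace; a condensed sketch of the invocation recorded in this log is shown below. The backgrounding and pid capture are an assumption about what nvmfappstart/waitforlisten do around the command, while the flags themselves are taken from the trace: -i 0 selects the shared-memory id, -e 0xFFFF enables all tracepoint groups (the "Tracepoint Group Mask 0xFFFF" notice), and -m 0xF yields the four reactor cores reported next.

# Start nvmf_tgt in the namespace that owns 10.0.0.2 and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                      # assumed pid capture; this run reports nvmfpid=1365368
# waitforlisten (autotest_common.sh) then polls until /var/tmp/spdk.sock accepts RPCs.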
00:14:43.213 [2024-07-25 16:54:02.708691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.213 [2024-07-25 16:54:02.708779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.213 [2024-07-25 16:54:02.708930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.213 [2024-07-25 16:54:02.708931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.213 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:43.213 "tick_rate": 2400000000, 00:14:43.213 "poll_groups": [ 00:14:43.213 { 00:14:43.213 "name": "nvmf_tgt_poll_group_000", 00:14:43.213 "admin_qpairs": 0, 00:14:43.213 "io_qpairs": 0, 00:14:43.213 "current_admin_qpairs": 0, 00:14:43.213 "current_io_qpairs": 0, 00:14:43.213 "pending_bdev_io": 0, 00:14:43.213 "completed_nvme_io": 0, 00:14:43.213 "transports": [] 00:14:43.213 }, 00:14:43.213 { 00:14:43.214 "name": "nvmf_tgt_poll_group_001", 00:14:43.214 "admin_qpairs": 0, 00:14:43.214 "io_qpairs": 0, 00:14:43.214 "current_admin_qpairs": 0, 00:14:43.214 "current_io_qpairs": 0, 00:14:43.214 "pending_bdev_io": 0, 00:14:43.214 "completed_nvme_io": 0, 00:14:43.214 "transports": [] 00:14:43.214 }, 00:14:43.214 { 00:14:43.214 "name": "nvmf_tgt_poll_group_002", 00:14:43.214 "admin_qpairs": 0, 00:14:43.214 "io_qpairs": 0, 00:14:43.214 "current_admin_qpairs": 0, 00:14:43.214 "current_io_qpairs": 0, 00:14:43.214 "pending_bdev_io": 0, 00:14:43.214 "completed_nvme_io": 0, 00:14:43.214 "transports": [] 00:14:43.214 }, 00:14:43.214 { 00:14:43.214 "name": "nvmf_tgt_poll_group_003", 00:14:43.214 "admin_qpairs": 0, 00:14:43.214 "io_qpairs": 0, 00:14:43.214 "current_admin_qpairs": 0, 00:14:43.214 "current_io_qpairs": 0, 00:14:43.214 "pending_bdev_io": 0, 00:14:43.214 "completed_nvme_io": 0, 00:14:43.214 "transports": [] 00:14:43.214 } 00:14:43.214 ] 00:14:43.214 }' 00:14:43.214 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:43.214 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:43.214 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:43.214 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:43.214 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
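The (( 4 == 4 )) comparison above is rpc.sh's jcount helper verifying that nvmf_get_stats reports one poll group per core in the 0xF mask. In plain shell, that check and the jsum checks that follow reduce to roughly the sketch below; rpc_cmd is the test framework's RPC wrapper seen in the trace.

# Count poll groups and sum per-group qpair counters from nvmf_get_stats.
stats=$(rpc_cmd nvmf_get_stats)                                      # JSON as printed above
echo "$stats" | jq '.poll_groups[].name' | wc -l                     # 4 groups for -m 0xF
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'   # 0 before any host connects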
00:14:43.214 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:43.474 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:43.474 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.474 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.474 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.474 [2024-07-25 16:54:03.508659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:43.475 "tick_rate": 2400000000, 00:14:43.475 "poll_groups": [ 00:14:43.475 { 00:14:43.475 "name": "nvmf_tgt_poll_group_000", 00:14:43.475 "admin_qpairs": 0, 00:14:43.475 "io_qpairs": 0, 00:14:43.475 "current_admin_qpairs": 0, 00:14:43.475 "current_io_qpairs": 0, 00:14:43.475 "pending_bdev_io": 0, 00:14:43.475 "completed_nvme_io": 0, 00:14:43.475 "transports": [ 00:14:43.475 { 00:14:43.475 "trtype": "TCP" 00:14:43.475 } 00:14:43.475 ] 00:14:43.475 }, 00:14:43.475 { 00:14:43.475 "name": "nvmf_tgt_poll_group_001", 00:14:43.475 "admin_qpairs": 0, 00:14:43.475 "io_qpairs": 0, 00:14:43.475 "current_admin_qpairs": 0, 00:14:43.475 "current_io_qpairs": 0, 00:14:43.475 "pending_bdev_io": 0, 00:14:43.475 "completed_nvme_io": 0, 00:14:43.475 "transports": [ 00:14:43.475 { 00:14:43.475 "trtype": "TCP" 00:14:43.475 } 00:14:43.475 ] 00:14:43.475 }, 00:14:43.475 { 00:14:43.475 "name": "nvmf_tgt_poll_group_002", 00:14:43.475 "admin_qpairs": 0, 00:14:43.475 "io_qpairs": 0, 00:14:43.475 "current_admin_qpairs": 0, 00:14:43.475 "current_io_qpairs": 0, 00:14:43.475 "pending_bdev_io": 0, 00:14:43.475 "completed_nvme_io": 0, 00:14:43.475 "transports": [ 00:14:43.475 { 00:14:43.475 "trtype": "TCP" 00:14:43.475 } 00:14:43.475 ] 00:14:43.475 }, 00:14:43.475 { 00:14:43.475 "name": "nvmf_tgt_poll_group_003", 00:14:43.475 "admin_qpairs": 0, 00:14:43.475 "io_qpairs": 0, 00:14:43.475 "current_admin_qpairs": 0, 00:14:43.475 "current_io_qpairs": 0, 00:14:43.475 "pending_bdev_io": 0, 00:14:43.475 "completed_nvme_io": 0, 00:14:43.475 "transports": [ 00:14:43.475 { 00:14:43.475 "trtype": "TCP" 00:14:43.475 } 00:14:43.475 ] 00:14:43.475 } 00:14:43.475 ] 00:14:43.475 }' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:43.475 16:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.475 Malloc1 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.475 [2024-07-25 16:54:03.693848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:43.475 [2024-07-25 16:54:03.720731] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:43.475 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:43.475 could not add new controller: failed to write to nvme-fabrics device 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.475 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:43.736 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.736 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.736 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.736 16:54:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:45.122 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.122 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:45.122 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.122 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:45.122 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:47.039 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:47.039 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:47.039 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.039 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:47.039 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.039 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:47.039 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.300 [2024-07-25 16:54:07.435679] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:47.300 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:47.300 could not add new controller: failed to write to nvme-fabrics device 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.300 16:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.215 16:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.215 16:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.215 16:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.215 16:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:49.215 16:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
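This stretch exercises the subsystem host ACL: while allow_any_host is disabled, nvmf_qpair_access_allowed rejects a connect from a host NQN that is not on the subsystem's list (the "does not allow host" error plus the nvme-fabrics I/O error), and the connect goes through once the host is added with nvmf_subsystem_add_host or allow_any_host is re-enabled. A condensed reproduction of that check, assuming the same addresses and NQNs used in the trace:

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  # deny unlisted hosts, then try to connect as $HOSTNQN
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -d $SUBNQN
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUBNQN --hostnqn=$HOSTNQN   # rejected by the host ACL
  # add the host to the subsystem's list and retry
  ./scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUBNQN --hostnqn=$HOSTNQN   # accepted
  nvme disconnect -n $SUBNQN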
00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.130 [2024-07-25 16:54:11.196272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.130 
16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.130 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.131 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.131 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.131 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.131 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.131 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:52.518 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:52.518 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:52.518 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.518 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:52.518 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 [2024-07-25 16:54:14.988042] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.066 16:54:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 16:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.066 16:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:55.066 16:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.066 16:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 16:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.066 16:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:56.529 16:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.529 16:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:14:56.529 16:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.529 16:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:56.529 16:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:58.444 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:58.444 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:58.444 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.444 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:58.444 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.444 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:58.444 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.705 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:58.705 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:58.705 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:58.705 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.705 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.706 [2024-07-25 16:54:18.848051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.706 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.093 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.093 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:00.093 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.093 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:00.093 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.642 16:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.642 [2024-07-25 16:54:22.563253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.642 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:04.030 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:04.030 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:04.030 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.030 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:04.030 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.946 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.946 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.946 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.946 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:05.946 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.946 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:05.946 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.208 [2024-07-25 16:54:26.295242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.208 16:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.596 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:07.596 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:07.596 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.596 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:07.596 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:10.146 16:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 [2024-07-25 16:54:30.023120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 [2024-07-25 16:54:30.087262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 [2024-07-25 16:54:30.147431] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 [2024-07-25 16:54:30.211651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 [2024-07-25 16:54:30.271843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 
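The iterations traced above all drive the same subsystem lifecycle through the rpc_cmd wrapper. A minimal standalone sketch of one pass, using the same scripts/rpc.py path and arguments as the trace, and assuming a running nvmf_tgt that already has the Malloc1 bdev:

# One iteration of the rpc.sh create/tear-down loop, run against the target's
# default RPC socket (assumes nvmf_tgt is up and the Malloc1 bdev exists).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME            # subsystem with fixed serial
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # NVMe/TCP listener on 10.0.0.2:4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1                            # attach Malloc1 as namespace 1
$rpc nvmf_subsystem_allow_any_host "$nqn"                            # no host allow-list
$rpc nvmf_subsystem_remove_ns "$nqn" 1                               # detach namespace 1
$rpc nvmf_delete_subsystem "$nqn"                                    # drop the subsystem again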
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:10.147 "tick_rate": 2400000000, 00:15:10.147 "poll_groups": [ 00:15:10.147 { 00:15:10.147 "name": "nvmf_tgt_poll_group_000", 00:15:10.147 "admin_qpairs": 0, 00:15:10.147 "io_qpairs": 224, 00:15:10.147 "current_admin_qpairs": 0, 00:15:10.147 "current_io_qpairs": 0, 00:15:10.147 "pending_bdev_io": 0, 00:15:10.147 "completed_nvme_io": 226, 00:15:10.147 "transports": [ 00:15:10.147 { 00:15:10.147 "trtype": "TCP" 00:15:10.147 } 00:15:10.147 ] 00:15:10.147 }, 00:15:10.147 { 00:15:10.147 "name": "nvmf_tgt_poll_group_001", 00:15:10.147 "admin_qpairs": 1, 00:15:10.147 "io_qpairs": 223, 00:15:10.147 "current_admin_qpairs": 0, 00:15:10.147 "current_io_qpairs": 0, 00:15:10.147 "pending_bdev_io": 0, 00:15:10.147 "completed_nvme_io": 225, 00:15:10.147 "transports": [ 00:15:10.147 { 00:15:10.147 "trtype": "TCP" 00:15:10.147 } 00:15:10.147 ] 00:15:10.147 }, 00:15:10.147 { 00:15:10.147 "name": "nvmf_tgt_poll_group_002", 00:15:10.147 "admin_qpairs": 6, 00:15:10.147 "io_qpairs": 218, 00:15:10.147 "current_admin_qpairs": 0, 00:15:10.147 "current_io_qpairs": 0, 00:15:10.147 "pending_bdev_io": 0, 00:15:10.147 "completed_nvme_io": 267, 00:15:10.147 "transports": [ 00:15:10.147 { 00:15:10.147 "trtype": "TCP" 00:15:10.147 } 00:15:10.147 ] 00:15:10.147 }, 00:15:10.147 { 00:15:10.147 "name": "nvmf_tgt_poll_group_003", 00:15:10.147 "admin_qpairs": 0, 00:15:10.147 "io_qpairs": 224, 00:15:10.147 "current_admin_qpairs": 0, 00:15:10.147 "current_io_qpairs": 0, 00:15:10.147 "pending_bdev_io": 0, 00:15:10.147 "completed_nvme_io": 521, 00:15:10.147 "transports": [ 00:15:10.147 { 00:15:10.147 "trtype": "TCP" 00:15:10.147 } 00:15:10.147 ] 00:15:10.147 } 00:15:10.147 ] 00:15:10.147 }' 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:10.147 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:10.148 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:10.148 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:10.148 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.409 rmmod nvme_tcp 00:15:10.409 rmmod nvme_fabrics 00:15:10.409 rmmod nvme_keyring 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1365368 ']' 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1365368 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1365368 ']' 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1365368 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1365368 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1365368' 00:15:10.409 killing process with pid 1365368 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1365368 00:15:10.409 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1365368 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
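The jsum helper seen in the trace sums one field of the captured nvmf_get_stats output: jq pulls the field out of every poll group and awk totals the column, and the test only asserts that each total is positive. A sketch of the same aggregation, assuming $stats holds the raw JSON captured above (the real helper lives in target/rpc.sh):

# jsum-style aggregation over the nvmf_get_stats JSON.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 0+1+6+0 = 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 224+223+218+224 = 889 in this run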
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.671 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.587 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.587 00:15:12.587 real 0m37.672s 00:15:12.587 user 1m53.899s 00:15:12.587 sys 0m7.270s 00:15:12.587 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.587 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.587 ************************************ 00:15:12.587 END TEST nvmf_rpc 00:15:12.587 ************************************ 00:15:12.587 16:54:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:12.587 16:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:12.587 16:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.587 16:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.849 ************************************ 00:15:12.849 START TEST nvmf_invalid 00:15:12.849 ************************************ 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:12.849 * Looking for test storage... 00:15:12.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:12.849 16:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.849 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.850 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.850 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.850 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.850 16:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:12.850 16:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:12.850 16:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:12.850 16:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:21.030 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:21.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:21.030 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.030 16:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:21.030 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.030 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:21.031 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:21.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:15:21.031 00:15:21.031 --- 10.0.0.2 ping statistics --- 00:15:21.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.031 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:15:21.031 00:15:21.031 --- 10.0.0.1 ping statistics --- 00:15:21.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.031 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1375671 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1375671 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1375671 ']' 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.031 16:54:40 
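nvmf_tcp_init, traced above, moves one port of the e810 pair into a private network namespace and gives the two ends back-to-back addresses before the ping check. Condensed, the commands it runs in this job are roughly:

# TCP test-network bring-up (cvl_0_0 / cvl_0_1 are the two e810 ports found
# earlier; run as root).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # reachability check before starting the target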
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.031 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:21.031 [2024-07-25 16:54:40.235315] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:15:21.031 [2024-07-25 16:54:40.235370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.031 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.031 [2024-07-25 16:54:40.303901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.031 [2024-07-25 16:54:40.373697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.031 [2024-07-25 16:54:40.373735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.031 [2024-07-25 16:54:40.373742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.031 [2024-07-25 16:54:40.373749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.031 [2024-07-25 16:54:40.373754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
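nvmfappstart then launches the target inside that namespace; the EAL and app_setup_trace notices above and the reactor notices below are its normal start-up output. Roughly, per the trace (the wait loop here is only a crude stand-in for the harness's waitforlisten helper):

# Start nvmf_tgt in the target namespace: shared-memory id 0, all tracepoint
# groups enabled, core mask 0xF (the four reactors reported below).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait until the app answers on its RPC socket (what waitforlisten does).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done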
00:15:21.031 [2024-07-25 16:54:40.373814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.031 [2024-07-25 16:54:40.373904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.031 [2024-07-25 16:54:40.374130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.031 [2024-07-25 16:54:40.374131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14428 00:15:21.031 [2024-07-25 16:54:41.201534] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:21.031 { 00:15:21.031 "nqn": "nqn.2016-06.io.spdk:cnode14428", 00:15:21.031 "tgt_name": "foobar", 00:15:21.031 "method": "nvmf_create_subsystem", 00:15:21.031 "req_id": 1 00:15:21.031 } 00:15:21.031 Got JSON-RPC error response 00:15:21.031 response: 00:15:21.031 { 00:15:21.031 "code": -32603, 00:15:21.031 "message": "Unable to find target foobar" 00:15:21.031 }' 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:21.031 { 00:15:21.031 "nqn": "nqn.2016-06.io.spdk:cnode14428", 00:15:21.031 "tgt_name": "foobar", 00:15:21.031 "method": "nvmf_create_subsystem", 00:15:21.031 "req_id": 1 00:15:21.031 } 00:15:21.031 Got JSON-RPC error response 00:15:21.031 response: 00:15:21.031 { 00:15:21.031 "code": -32603, 00:15:21.031 "message": "Unable to find target foobar" 00:15:21.031 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:21.031 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23764 00:15:21.293 [2024-07-25 16:54:41.378118] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23764: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:21.293 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:21.293 { 00:15:21.293 "nqn": "nqn.2016-06.io.spdk:cnode23764", 00:15:21.293 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:21.293 "method": "nvmf_create_subsystem", 00:15:21.293 "req_id": 1 00:15:21.293 } 00:15:21.293 Got JSON-RPC error 
response 00:15:21.293 response: 00:15:21.293 { 00:15:21.293 "code": -32602, 00:15:21.293 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:21.293 }' 00:15:21.293 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:21.293 { 00:15:21.293 "nqn": "nqn.2016-06.io.spdk:cnode23764", 00:15:21.293 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:21.293 "method": "nvmf_create_subsystem", 00:15:21.293 "req_id": 1 00:15:21.293 } 00:15:21.293 Got JSON-RPC error response 00:15:21.293 response: 00:15:21.293 { 00:15:21.293 "code": -32602, 00:15:21.293 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:21.293 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:21.293 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:21.293 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16070 00:15:21.293 [2024-07-25 16:54:41.554714] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16070: invalid model number 'SPDK_Controller' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:21.556 { 00:15:21.556 "nqn": "nqn.2016-06.io.spdk:cnode16070", 00:15:21.556 "model_number": "SPDK_Controller\u001f", 00:15:21.556 "method": "nvmf_create_subsystem", 00:15:21.556 "req_id": 1 00:15:21.556 } 00:15:21.556 Got JSON-RPC error response 00:15:21.556 response: 00:15:21.556 { 00:15:21.556 "code": -32602, 00:15:21.556 "message": "Invalid MN SPDK_Controller\u001f" 00:15:21.556 }' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:21.556 { 00:15:21.556 "nqn": "nqn.2016-06.io.spdk:cnode16070", 00:15:21.556 "model_number": "SPDK_Controller\u001f", 00:15:21.556 "method": "nvmf_create_subsystem", 00:15:21.556 "req_id": 1 00:15:21.556 } 00:15:21.556 Got JSON-RPC error response 00:15:21.556 response: 00:15:21.556 { 00:15:21.556 "code": -32602, 00:15:21.556 "message": "Invalid MN SPDK_Controller\u001f" 00:15:21.556 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
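Each negative case above follows the same pattern: issue the RPC with one deliberately bad argument, capture the JSON-RPC error, and glob-match the message. A sketch of the first two checks, using the same rpc.py path (error handling of the rpc_cmd wrapper is simplified here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Unknown target name must be rejected with "Unable to find target".
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14428 2>&1) || true
[[ $out == *"Unable to find target"* ]]

# A serial number containing a control byte (0x1f) must be rejected as an invalid SN.
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\x1f' nqn.2016-06.io.spdk:cnode23764 2>&1) || true
[[ $out == *"Invalid SN"* ]]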
target/invalid.sh@25 -- # printf %x 68 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:21.556 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=3 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'DkqV #+uV3u=fH;/`GwT3' 00:15:21.557 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'DkqV #+uV3u=fH;/`GwT3' nqn.2016-06.io.spdk:cnode15616 00:15:21.819 [2024-07-25 16:54:41.891799] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15616: invalid serial number 'DkqV #+uV3u=fH;/`GwT3' 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:21.819 { 00:15:21.819 "nqn": "nqn.2016-06.io.spdk:cnode15616", 00:15:21.819 "serial_number": "DkqV #+uV3u=fH;/`GwT3", 00:15:21.819 "method": "nvmf_create_subsystem", 00:15:21.819 "req_id": 1 00:15:21.819 } 00:15:21.819 Got JSON-RPC error response 00:15:21.819 response: 00:15:21.819 { 00:15:21.819 "code": -32602, 00:15:21.819 "message": "Invalid SN DkqV #+uV3u=fH;/`GwT3" 00:15:21.819 }' 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:21.819 { 00:15:21.819 "nqn": "nqn.2016-06.io.spdk:cnode15616", 00:15:21.819 "serial_number": "DkqV #+uV3u=fH;/`GwT3", 00:15:21.819 "method": "nvmf_create_subsystem", 00:15:21.819 "req_id": 1 00:15:21.819 } 00:15:21.819 Got JSON-RPC error response 00:15:21.819 response: 00:15:21.819 { 00:15:21.819 "code": -32602, 00:15:21.819 "message": "Invalid SN DkqV #+uV3u=fH;/`GwT3" 00:15:21.819 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
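The per-character trace above is gen_random_s building a 21-byte serial number from character codes 32 through 127; with RANDOM seeded to 0 earlier in invalid.sh the string is reproducible across runs. A compact reconstruction based on what the trace shows (the way an index is drawn from the chars array is an assumption; the real helper lives in target/invalid.sh):

# gen_random_s sketch: append $1 random characters with codes 32..127.
gen_random_s() {
    local length=$1 ll string char
    local chars=($(seq 32 127))                               # the chars=() array in the trace
    for ((ll = 0; ll < length; ll++)); do
        char=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}") # decimal code -> hex
        string+=$(echo -e "\x$char")                          # the string+= steps in the trace
    done
    # invalid.sh@28 also tests whether the result starts with '-' before echoing;
    # the branch taken in that case is not visible in this trace.
    echo "$string"
}

gen_random_s 21    # e.g. 'DkqV #+uV3u=fH;/`GwT3' in this run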
target/invalid.sh@24 -- # (( ll++ )) 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:21.819 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=')' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # echo -e '\x53' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 91 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.820 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:22.082 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:22.083 16:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:15:22.083 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '59Zr/6\)~.x:hRS 
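What the long trace above expands is gen_random_s in target/invalid.sh: a chars array holding the ASCII codes 32 through 127, an index picked per iteration, printf %x plus echo -e to turn the code into a character, and string+= until the requested length is reached (21 characters for the serial-number case, then 41 for the next check); the result is handed to rpc.py nvmf_create_subsystem as the serial number, and the test only requires that the JSON-RPC reply contains 'Invalid SN'. A condensed sketch of the same idea (gen_random_serial and the $RANDOM-based picker are illustrative, not the script's exact helper):

  # Build a random string of printable characters (the script draws from codes 32..127).
  gen_random_serial() {
      local length=$1 string="" code i
      for (( i = 0; i < length; i++ )); do
          code=$(( 32 + RANDOM % 95 ))                   # illustrative picker, codes 32..126
          string+=$(printf "\\x$(printf '%x' "$code")")  # code -> character, as in the trace
      done
      printf '%s\n' "$string"
  }

  # A 21-character serial exceeds the 20-byte NVMe serial field, so the RPC must fail cleanly.
  serial=$(gen_random_serial 21)
  out=$(scripts/rpc.py nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode15616 2>&1) || true
  [[ $out == *"Invalid SN"* ]] || echo "unexpected response: $out"

The real helper indexes the pre-built chars array instead of computing the code, which is exactly the (( ll++ )) / printf %x / echo -e cadence that fills the trace.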
/dev/null' 00:15:23.910 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:26.459 00:15:26.459 real 0m13.370s 00:15:26.459 user 0m19.273s 00:15:26.459 sys 0m6.280s 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:26.459 ************************************ 00:15:26.459 END TEST nvmf_invalid 00:15:26.459 ************************************ 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.459 ************************************ 00:15:26.459 START TEST nvmf_connect_stress 00:15:26.459 ************************************ 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.459 * Looking for test storage... 00:15:26.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.459 16:54:46 
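Sourcing nvmf/common.sh (traced here and over the next two lines) pins the test constants: NVMe/TCP ports 4420-4422, the 192.168.100 address prefix, the SPDKISFASTANDAWESOME serial, and a per-run host identity produced by nvme gen-hostnqn, whose UUID suffix doubles as the host ID and is packed into the NVME_HOST argument array. A minimal sketch of that identity setup (the ##*: expansion and the nvme connect example are assumptions; only the variable names and values come from the trace):

  # Per-run host identity: generate an NQN, reuse its UUID suffix as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed extraction; matches the traced values
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # Later initiator-side calls can then be built as, for instance:
  # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1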
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:26.459 16:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:34.608 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:34.608 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:34.608 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.608 16:54:53 
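gather_supported_nvmf_pci_devs, traced above, whitelists the Intel E810 (0x1592/0x159b) and X722 (0x37d2) device IDs plus a set of Mellanox IDs, then resolves each selected PCI function to its kernel interface by globbing /sys/bus/pci/devices/<bdf>/net/; the first E810 port here (0000:4b:00.0, device 0x159b) resolves to cvl_0_0, and the second to cvl_0_1 just below. A stripped-down sketch of that lookup (pci_to_netdev is a hypothetical helper name; the glob is the same one the script uses):

  # Resolve a PCI function to the net interface the kernel created for it.
  pci_to_netdev() {
      local pci=$1 ifpath
      for ifpath in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $ifpath ]] && echo "${ifpath##*/}"
      done
  }

  pci_to_netdev 0000:4b:00.0              # -> cvl_0_0
  pci_to_netdev 0000:4b:00.1              # -> cvl_0_1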
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:34.608 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.608 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:34.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:15:34.609 00:15:34.609 --- 10.0.0.2 ping statistics --- 00:15:34.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.609 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.470 ms 00:15:34.609 00:15:34.609 --- 10.0.0.1 ping statistics --- 00:15:34.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.609 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1380677 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1380677 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:34.609 16:54:53 
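nvmf_tcp_init, traced across the preceding lines, builds the point-to-point test topology without any virtual links: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and a ping in each direction (the two statistics blocks just above) confirms reachability before anything NVMe-related starts. Collected from the commands in the trace:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                           # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move one physical port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator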
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1380677 ']' 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.609 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 [2024-07-25 16:54:53.774222] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:15:34.609 [2024-07-25 16:54:53.774291] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.609 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.609 [2024-07-25 16:54:53.862851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.609 [2024-07-25 16:54:53.956639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.609 [2024-07-25 16:54:53.956698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.609 [2024-07-25 16:54:53.956707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.609 [2024-07-25 16:54:53.956714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.609 [2024-07-25 16:54:53.956720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
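nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xE (cores 1-3, hence the three 'Reactor started' notices just below) and -e 0xFFFF to enable all tracepoint groups, then waitforlisten blocks until the app's JSON-RPC socket at /var/tmp/spdk.sock answers. A sketch of that launch-and-wait pattern; the rpc_get_methods probe and the retry bounds are illustrative, not the actual helper's implementation:

  # Start the target in the namespace, reactors on cores 1-3 (-m 0xE), all tracepoints on.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Wait until the JSON-RPC server on /var/tmp/spdk.sock starts answering.
  for (( i = 0; i < 100; i++ )); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done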
00:15:34.609 [2024-07-25 16:54:53.956851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.609 [2024-07-25 16:54:53.957019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.609 [2024-07-25 16:54:53.957020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 [2024-07-25 16:54:54.610405] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 [2024-07-25 16:54:54.645119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 NULL1 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
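Once the reactors are up, connect_stress.sh provisions the target over JSON-RPC: the TCP transport (with the options carried in NVMF_TRANSPORT_OPTS plus -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 with allow-any-host, serial SPDK00000000000001 and room for 10 namespaces, a TCP listener on the namespaced address 10.0.0.2:4420, and a null bdev NULL1 (1000 MiB, 512-byte blocks) for use as backing storage in the steps that follow. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock, so the sequence amounts to:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512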
target/connect_stress.sh@21 -- # PERF_PID=1380863 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.609 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.610 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.871 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.871 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:34.871 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.871 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.871 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.444 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.444 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:35.444 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.444 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
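The workload itself is test/nvme/connect_stress/connect_stress, started in the background on core 0 (-c 0x1) against the cnode1 listener for 10 seconds (-t 10), with its PID recorded as PERF_PID; in parallel a batch file rpc.txt is assembled by the twenty cat heredocs in the loop traced above (xtrace prints only the cat commands, not the heredoc bodies, so the two RPC lines in the sketch are placeholders, not the script's actual contents). The pattern being set up:

  # Background connection-stress workload against the namespaced listener.
  test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!

  # Queue a batch of RPCs to replay against the target while connections churn.
  rpcs=rpc.txt
  rm -f "$rpcs"
  for i in $(seq 1 20); do
      # Placeholder RPCs: the real heredoc contents are not shown by xtrace.
      cat <<EOF >> "$rpcs"
bdev_null_create NULL$i 1000 512
bdev_null_delete NULL$i
EOF
  done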
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.444 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.705 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.705 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:35.705 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.705 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.705 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.966 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.967 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:35.967 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.967 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.967 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.228 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.228 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:36.228 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.228 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.228 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.489 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.489 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:36.489 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.489 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.489 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.063 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.063 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:37.063 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.063 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.063 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.324 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.324 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:37.324 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.324 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.324 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.585 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.585 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:37.585 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.585 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.585 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.846 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.846 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:37.846 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.846 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.846 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.107 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.107 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:38.107 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.107 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.107 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.680 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.680 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:38.680 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.680 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.680 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.942 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.942 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:38.942 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.942 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.942 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.204 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.204 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:39.204 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.204 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.204 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.465 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.465 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:39.465 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.465 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.465 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.726 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.726 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:39.726 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.727 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.727 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.361 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.933 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:40.933 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.933 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.933 16:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.195 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.195 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:41.195 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.195 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.195 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.456 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.456 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:41.456 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.456 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.456 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.718 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.718 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:41.718 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.718 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.718 16:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.291 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.291 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:42.291 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.291 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.291 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.552 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.552 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:42.552 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.552 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.552 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.813 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.813 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:42.813 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.813 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.813 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.074 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.074 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:43.074 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.074 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.074 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.334 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.334 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:43.334 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.334 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.334 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.905 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.905 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:43.905 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.905 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.905 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.167 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.167 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:44.167 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.167 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.167 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.428 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.428 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:44.428 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.428 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.428 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.689 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1380863 00:15:44.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1380863) - No such process 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1380863 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 
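The connect_stress section traced above follows a poll-while-alive pattern: the stressor was launched in the background (PERF_PID=1380863), a batch of RPCs was accumulated in rpc.txt through the seq 1 20 / cat loop, and the script then keeps invoking rpc_cmd for as long as kill -0 $PERF_PID reports the process alive, stopping once the -t 10 run time expires and kill reports "No such process". A minimal sketch of that pattern, assuming rpc_cmd consumes the batch on stdin (xtrace does not print redirections, so that detail is not visible in this log) and with the batch contents reduced to a placeholder:

    # launch the stressor against the listener brought up earlier in the log
    ./connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    rpcs=rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # the real script cats one RPC per iteration into the batch; elided in this excerpt
        echo "# rpc $i placeholder" >> "$rpcs"
    done

    # kill -0 sends no signal; it only checks that the PID still exists
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"
    done
    wait "$PERF_PID" || true
    rm -f "$rpcs"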
00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.689 rmmod nvme_tcp 00:15:44.689 rmmod nvme_fabrics 00:15:44.689 rmmod nvme_keyring 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1380677 ']' 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1380677 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1380677 ']' 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1380677 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.689 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1380677 00:15:44.950 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:44.950 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:44.950 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1380677' 00:15:44.950 killing process with pid 1380677 00:15:44.950 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1380677 00:15:44.950 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1380677 00:15:44.950 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.950 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.950 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.950 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.950 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.950 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.950 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.950 16:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.500 00:15:47.500 real 0m20.869s 00:15:47.500 user 0m41.973s 00:15:47.500 sys 0m8.763s 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.500 ************************************ 00:15:47.500 END TEST nvmf_connect_stress 00:15:47.500 ************************************ 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.500 ************************************ 00:15:47.500 START TEST nvmf_fused_ordering 00:15:47.500 ************************************ 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:47.500 * Looking for test storage... 00:15:47.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.500 16:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.500 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.501 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:54.101 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.101 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:54.102 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:54.102 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.102 16:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:54.102 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:54.102 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:54.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:15:54.364 00:15:54.364 --- 10.0.0.2 ping statistics --- 00:15:54.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.364 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:15:54.364 00:15:54.364 --- 10.0.0.1 ping statistics --- 00:15:54.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.364 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1386960 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1386960 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:54.364 16:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1386960 ']' 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.364 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.364 [2024-07-25 16:55:14.577823] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:15:54.364 [2024-07-25 16:55:14.577871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.364 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.626 [2024-07-25 16:55:14.658609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.626 [2024-07-25 16:55:14.722553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.626 [2024-07-25 16:55:14.722591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.626 [2024-07-25 16:55:14.722598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.626 [2024-07-25 16:55:14.722604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.626 [2024-07-25 16:55:14.722610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
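The nvmftestinit / nvmf_tcp_init portion of the trace above wires the two detected E810 ports into a back-to-back test bed: cvl_0_0 is moved into a private namespace and becomes the target-side 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator-side 10.0.0.1, and nvmf_tgt is then started inside that namespace. Condensed from the commands as they appear in the log (interface names and addresses are simply what this CI node detected):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP in on the initiator-facing port, then check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # the target runs inside the namespace on core 1 (-m 0x2) with all trace groups enabled
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2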
00:15:54.626 [2024-07-25 16:55:14.722637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 [2024-07-25 16:55:15.401431] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 [2024-07-25 16:55:15.417727] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 NULL1 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.200 16:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.200 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:55.461 [2024-07-25 16:55:15.475185] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:15:55.461 [2024-07-25 16:55:15.475254] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387228 ] 00:15:55.461 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.035 Attached to nqn.2016-06.io.spdk:cnode1 00:15:56.035 Namespace ID: 1 size: 1GB 00:15:56.035 fused_ordering(0) 00:15:56.035 fused_ordering(1) 00:15:56.035 fused_ordering(2) 00:15:56.035 fused_ordering(3) 00:15:56.035 fused_ordering(4) 00:15:56.035 fused_ordering(5) 00:15:56.035 fused_ordering(6) 00:15:56.035 fused_ordering(7) 00:15:56.035 fused_ordering(8) 00:15:56.035 fused_ordering(9) 00:15:56.035 fused_ordering(10) 00:15:56.035 fused_ordering(11) 00:15:56.035 fused_ordering(12) 00:15:56.035 fused_ordering(13) 00:15:56.035 fused_ordering(14) 00:15:56.035 fused_ordering(15) 00:15:56.035 fused_ordering(16) 00:15:56.035 fused_ordering(17) 00:15:56.035 fused_ordering(18) 00:15:56.035 fused_ordering(19) 00:15:56.035 fused_ordering(20) 00:15:56.035 fused_ordering(21) 00:15:56.035 fused_ordering(22) 00:15:56.035 fused_ordering(23) 00:15:56.035 fused_ordering(24) 00:15:56.035 fused_ordering(25) 00:15:56.035 fused_ordering(26) 00:15:56.035 fused_ordering(27) 00:15:56.035 fused_ordering(28) 00:15:56.035 fused_ordering(29) 00:15:56.035 fused_ordering(30) 00:15:56.035 fused_ordering(31) 00:15:56.035 fused_ordering(32) 00:15:56.035 fused_ordering(33) 00:15:56.035 fused_ordering(34) 00:15:56.035 fused_ordering(35) 00:15:56.035 fused_ordering(36) 00:15:56.035 fused_ordering(37) 00:15:56.035 fused_ordering(38) 00:15:56.035 fused_ordering(39) 00:15:56.035 fused_ordering(40) 00:15:56.035 fused_ordering(41) 00:15:56.035 fused_ordering(42) 00:15:56.035 fused_ordering(43) 00:15:56.035 fused_ordering(44) 00:15:56.035 fused_ordering(45) 00:15:56.035 fused_ordering(46) 00:15:56.035 fused_ordering(47) 00:15:56.035 fused_ordering(48) 00:15:56.035 fused_ordering(49) 00:15:56.035 fused_ordering(50) 00:15:56.035 fused_ordering(51) 00:15:56.035 fused_ordering(52) 00:15:56.035 fused_ordering(53) 00:15:56.035 fused_ordering(54) 00:15:56.035 fused_ordering(55) 00:15:56.035 fused_ordering(56) 00:15:56.035 fused_ordering(57) 00:15:56.035 fused_ordering(58) 00:15:56.035 fused_ordering(59) 00:15:56.035 fused_ordering(60) 00:15:56.035 
fused_ordering(61) 00:15:56.035 fused_ordering(62) 00:15:56.035 fused_ordering(63) 00:15:56.035 fused_ordering(64) 00:15:56.035 fused_ordering(65) 00:15:56.035 fused_ordering(66) 00:15:56.035 fused_ordering(67) 00:15:56.035 fused_ordering(68) 00:15:56.035 fused_ordering(69) 00:15:56.035 fused_ordering(70) 00:15:56.035 fused_ordering(71) 00:15:56.035 fused_ordering(72) 00:15:56.035 fused_ordering(73) 00:15:56.035 fused_ordering(74) 00:15:56.035 fused_ordering(75) 00:15:56.035 fused_ordering(76) 00:15:56.035 fused_ordering(77) 00:15:56.035 fused_ordering(78) 00:15:56.035 fused_ordering(79) 00:15:56.035 fused_ordering(80) 00:15:56.035 fused_ordering(81) 00:15:56.035 fused_ordering(82) 00:15:56.035 fused_ordering(83) 00:15:56.035 fused_ordering(84) 00:15:56.035 fused_ordering(85) 00:15:56.035 fused_ordering(86) 00:15:56.035 fused_ordering(87) 00:15:56.035 fused_ordering(88) 00:15:56.035 fused_ordering(89) 00:15:56.035 fused_ordering(90) 00:15:56.035 fused_ordering(91) 00:15:56.035 fused_ordering(92) 00:15:56.035 fused_ordering(93) 00:15:56.035 fused_ordering(94) 00:15:56.035 fused_ordering(95) 00:15:56.035 fused_ordering(96) 00:15:56.035 fused_ordering(97) 00:15:56.035 fused_ordering(98) 00:15:56.035 fused_ordering(99) 00:15:56.035 fused_ordering(100) 00:15:56.035 fused_ordering(101) 00:15:56.035 fused_ordering(102) 00:15:56.035 fused_ordering(103) 00:15:56.035 fused_ordering(104) 00:15:56.035 fused_ordering(105) 00:15:56.035 fused_ordering(106) 00:15:56.035 fused_ordering(107) 00:15:56.035 fused_ordering(108) 00:15:56.035 fused_ordering(109) 00:15:56.035 fused_ordering(110) 00:15:56.035 fused_ordering(111) 00:15:56.035 fused_ordering(112) 00:15:56.035 fused_ordering(113) 00:15:56.035 fused_ordering(114) 00:15:56.035 fused_ordering(115) 00:15:56.035 fused_ordering(116) 00:15:56.035 fused_ordering(117) 00:15:56.035 fused_ordering(118) 00:15:56.035 fused_ordering(119) 00:15:56.035 fused_ordering(120) 00:15:56.035 fused_ordering(121) 00:15:56.035 fused_ordering(122) 00:15:56.035 fused_ordering(123) 00:15:56.035 fused_ordering(124) 00:15:56.035 fused_ordering(125) 00:15:56.035 fused_ordering(126) 00:15:56.035 fused_ordering(127) 00:15:56.035 fused_ordering(128) 00:15:56.035 fused_ordering(129) 00:15:56.035 fused_ordering(130) 00:15:56.035 fused_ordering(131) 00:15:56.035 fused_ordering(132) 00:15:56.035 fused_ordering(133) 00:15:56.035 fused_ordering(134) 00:15:56.035 fused_ordering(135) 00:15:56.035 fused_ordering(136) 00:15:56.035 fused_ordering(137) 00:15:56.035 fused_ordering(138) 00:15:56.035 fused_ordering(139) 00:15:56.035 fused_ordering(140) 00:15:56.035 fused_ordering(141) 00:15:56.035 fused_ordering(142) 00:15:56.035 fused_ordering(143) 00:15:56.035 fused_ordering(144) 00:15:56.035 fused_ordering(145) 00:15:56.035 fused_ordering(146) 00:15:56.035 fused_ordering(147) 00:15:56.035 fused_ordering(148) 00:15:56.035 fused_ordering(149) 00:15:56.035 fused_ordering(150) 00:15:56.035 fused_ordering(151) 00:15:56.035 fused_ordering(152) 00:15:56.035 fused_ordering(153) 00:15:56.035 fused_ordering(154) 00:15:56.035 fused_ordering(155) 00:15:56.035 fused_ordering(156) 00:15:56.035 fused_ordering(157) 00:15:56.035 fused_ordering(158) 00:15:56.035 fused_ordering(159) 00:15:56.035 fused_ordering(160) 00:15:56.035 fused_ordering(161) 00:15:56.035 fused_ordering(162) 00:15:56.035 fused_ordering(163) 00:15:56.035 fused_ordering(164) 00:15:56.035 fused_ordering(165) 00:15:56.035 fused_ordering(166) 00:15:56.035 fused_ordering(167) 00:15:56.035 fused_ordering(168) 00:15:56.035 fused_ordering(169) 
00:15:56.035 fused_ordering(170) 00:15:56.035 fused_ordering(171) 00:15:56.035 fused_ordering(172) 00:15:56.035 fused_ordering(173) 00:15:56.035 fused_ordering(174) 00:15:56.035 fused_ordering(175) 00:15:56.035 fused_ordering(176) 00:15:56.035 fused_ordering(177) 00:15:56.035 fused_ordering(178) 00:15:56.035 fused_ordering(179) 00:15:56.035 fused_ordering(180) 00:15:56.035 fused_ordering(181) 00:15:56.035 fused_ordering(182) 00:15:56.035 fused_ordering(183) 00:15:56.035 fused_ordering(184) 00:15:56.035 fused_ordering(185) 00:15:56.035 fused_ordering(186) 00:15:56.035 fused_ordering(187) 00:15:56.035 fused_ordering(188) 00:15:56.035 fused_ordering(189) 00:15:56.035 fused_ordering(190) 00:15:56.035 fused_ordering(191) 00:15:56.035 fused_ordering(192) 00:15:56.035 fused_ordering(193) 00:15:56.035 fused_ordering(194) 00:15:56.035 fused_ordering(195) 00:15:56.035 fused_ordering(196) 00:15:56.035 fused_ordering(197) 00:15:56.035 fused_ordering(198) 00:15:56.035 fused_ordering(199) 00:15:56.035 fused_ordering(200) 00:15:56.035 fused_ordering(201) 00:15:56.035 fused_ordering(202) 00:15:56.035 fused_ordering(203) 00:15:56.035 fused_ordering(204) 00:15:56.035 fused_ordering(205) 00:15:56.609 fused_ordering(206) 00:15:56.609 fused_ordering(207) 00:15:56.609 fused_ordering(208) 00:15:56.609 fused_ordering(209) 00:15:56.609 fused_ordering(210) 00:15:56.609 fused_ordering(211) 00:15:56.609 fused_ordering(212) 00:15:56.609 fused_ordering(213) 00:15:56.609 fused_ordering(214) 00:15:56.609 fused_ordering(215) 00:15:56.609 fused_ordering(216) 00:15:56.609 fused_ordering(217) 00:15:56.609 fused_ordering(218) 00:15:56.609 fused_ordering(219) 00:15:56.609 fused_ordering(220) 00:15:56.609 fused_ordering(221) 00:15:56.609 fused_ordering(222) 00:15:56.609 fused_ordering(223) 00:15:56.609 fused_ordering(224) 00:15:56.609 fused_ordering(225) 00:15:56.609 fused_ordering(226) 00:15:56.609 fused_ordering(227) 00:15:56.609 fused_ordering(228) 00:15:56.609 fused_ordering(229) 00:15:56.609 fused_ordering(230) 00:15:56.609 fused_ordering(231) 00:15:56.609 fused_ordering(232) 00:15:56.609 fused_ordering(233) 00:15:56.609 fused_ordering(234) 00:15:56.609 fused_ordering(235) 00:15:56.609 fused_ordering(236) 00:15:56.609 fused_ordering(237) 00:15:56.609 fused_ordering(238) 00:15:56.609 fused_ordering(239) 00:15:56.609 fused_ordering(240) 00:15:56.609 fused_ordering(241) 00:15:56.609 fused_ordering(242) 00:15:56.609 fused_ordering(243) 00:15:56.609 fused_ordering(244) 00:15:56.609 fused_ordering(245) 00:15:56.609 fused_ordering(246) 00:15:56.609 fused_ordering(247) 00:15:56.609 fused_ordering(248) 00:15:56.609 fused_ordering(249) 00:15:56.609 fused_ordering(250) 00:15:56.609 fused_ordering(251) 00:15:56.609 fused_ordering(252) 00:15:56.609 fused_ordering(253) 00:15:56.609 fused_ordering(254) 00:15:56.609 fused_ordering(255) 00:15:56.609 fused_ordering(256) 00:15:56.609 fused_ordering(257) 00:15:56.609 fused_ordering(258) 00:15:56.609 fused_ordering(259) 00:15:56.609 fused_ordering(260) 00:15:56.609 fused_ordering(261) 00:15:56.609 fused_ordering(262) 00:15:56.609 fused_ordering(263) 00:15:56.609 fused_ordering(264) 00:15:56.609 fused_ordering(265) 00:15:56.609 fused_ordering(266) 00:15:56.609 fused_ordering(267) 00:15:56.609 fused_ordering(268) 00:15:56.609 fused_ordering(269) 00:15:56.609 fused_ordering(270) 00:15:56.609 fused_ordering(271) 00:15:56.609 fused_ordering(272) 00:15:56.609 fused_ordering(273) 00:15:56.609 fused_ordering(274) 00:15:56.609 fused_ordering(275) 00:15:56.609 fused_ordering(276) 00:15:56.609 
fused_ordering(277) 00:15:56.609 fused_ordering(278) 00:15:56.609 fused_ordering(279) 00:15:56.609 fused_ordering(280) 00:15:56.609 fused_ordering(281) 00:15:56.609 fused_ordering(282) 00:15:56.609 fused_ordering(283) 00:15:56.609 fused_ordering(284) 00:15:56.609 fused_ordering(285) 00:15:56.609 fused_ordering(286) 00:15:56.609 fused_ordering(287) 00:15:56.609 fused_ordering(288) 00:15:56.609 fused_ordering(289) 00:15:56.609 fused_ordering(290) 00:15:56.609 fused_ordering(291) 00:15:56.609 fused_ordering(292) 00:15:56.609 fused_ordering(293) 00:15:56.609 fused_ordering(294) 00:15:56.609 fused_ordering(295) 00:15:56.609 fused_ordering(296) 00:15:56.609 fused_ordering(297) 00:15:56.609 fused_ordering(298) 00:15:56.609 fused_ordering(299) 00:15:56.609 fused_ordering(300) 00:15:56.609 fused_ordering(301) 00:15:56.609 fused_ordering(302) 00:15:56.609 fused_ordering(303) 00:15:56.609 fused_ordering(304) 00:15:56.609 fused_ordering(305) 00:15:56.609 fused_ordering(306) 00:15:56.609 fused_ordering(307) 00:15:56.609 fused_ordering(308) 00:15:56.609 fused_ordering(309) 00:15:56.609 fused_ordering(310) 00:15:56.609 fused_ordering(311) 00:15:56.609 fused_ordering(312) 00:15:56.609 fused_ordering(313) 00:15:56.609 fused_ordering(314) 00:15:56.609 fused_ordering(315) 00:15:56.609 fused_ordering(316) 00:15:56.609 fused_ordering(317) 00:15:56.609 fused_ordering(318) 00:15:56.609 fused_ordering(319) 00:15:56.609 fused_ordering(320) 00:15:56.609 fused_ordering(321) 00:15:56.609 fused_ordering(322) 00:15:56.609 fused_ordering(323) 00:15:56.609 fused_ordering(324) 00:15:56.609 fused_ordering(325) 00:15:56.609 fused_ordering(326) 00:15:56.609 fused_ordering(327) 00:15:56.609 fused_ordering(328) 00:15:56.609 fused_ordering(329) 00:15:56.609 fused_ordering(330) 00:15:56.609 fused_ordering(331) 00:15:56.609 fused_ordering(332) 00:15:56.609 fused_ordering(333) 00:15:56.609 fused_ordering(334) 00:15:56.609 fused_ordering(335) 00:15:56.609 fused_ordering(336) 00:15:56.609 fused_ordering(337) 00:15:56.609 fused_ordering(338) 00:15:56.609 fused_ordering(339) 00:15:56.609 fused_ordering(340) 00:15:56.609 fused_ordering(341) 00:15:56.609 fused_ordering(342) 00:15:56.609 fused_ordering(343) 00:15:56.610 fused_ordering(344) 00:15:56.610 fused_ordering(345) 00:15:56.610 fused_ordering(346) 00:15:56.610 fused_ordering(347) 00:15:56.610 fused_ordering(348) 00:15:56.610 fused_ordering(349) 00:15:56.610 fused_ordering(350) 00:15:56.610 fused_ordering(351) 00:15:56.610 fused_ordering(352) 00:15:56.610 fused_ordering(353) 00:15:56.610 fused_ordering(354) 00:15:56.610 fused_ordering(355) 00:15:56.610 fused_ordering(356) 00:15:56.610 fused_ordering(357) 00:15:56.610 fused_ordering(358) 00:15:56.610 fused_ordering(359) 00:15:56.610 fused_ordering(360) 00:15:56.610 fused_ordering(361) 00:15:56.610 fused_ordering(362) 00:15:56.610 fused_ordering(363) 00:15:56.610 fused_ordering(364) 00:15:56.610 fused_ordering(365) 00:15:56.610 fused_ordering(366) 00:15:56.610 fused_ordering(367) 00:15:56.610 fused_ordering(368) 00:15:56.610 fused_ordering(369) 00:15:56.610 fused_ordering(370) 00:15:56.610 fused_ordering(371) 00:15:56.610 fused_ordering(372) 00:15:56.610 fused_ordering(373) 00:15:56.610 fused_ordering(374) 00:15:56.610 fused_ordering(375) 00:15:56.610 fused_ordering(376) 00:15:56.610 fused_ordering(377) 00:15:56.610 fused_ordering(378) 00:15:56.610 fused_ordering(379) 00:15:56.610 fused_ordering(380) 00:15:56.610 fused_ordering(381) 00:15:56.610 fused_ordering(382) 00:15:56.610 fused_ordering(383) 00:15:56.610 fused_ordering(384) 
00:15:56.610 fused_ordering(385) 00:15:56.610 fused_ordering(386) 00:15:56.610 fused_ordering(387) 00:15:56.610 fused_ordering(388) 00:15:56.610 fused_ordering(389) 00:15:56.610 fused_ordering(390) 00:15:56.610 fused_ordering(391) 00:15:56.610 fused_ordering(392) 00:15:56.610 fused_ordering(393) 00:15:56.610 fused_ordering(394) 00:15:56.610 fused_ordering(395) 00:15:56.610 fused_ordering(396) 00:15:56.610 fused_ordering(397) 00:15:56.610 fused_ordering(398) 00:15:56.610 fused_ordering(399) 00:15:56.610 fused_ordering(400) 00:15:56.610 fused_ordering(401) 00:15:56.610 fused_ordering(402) 00:15:56.610 fused_ordering(403) 00:15:56.610 fused_ordering(404) 00:15:56.610 fused_ordering(405) 00:15:56.610 fused_ordering(406) 00:15:56.610 fused_ordering(407) 00:15:56.610 fused_ordering(408) 00:15:56.610 fused_ordering(409) 00:15:56.610 fused_ordering(410) 00:15:57.183 fused_ordering(411) 00:15:57.183 fused_ordering(412) 00:15:57.183 fused_ordering(413) 00:15:57.183 fused_ordering(414) 00:15:57.183 fused_ordering(415) 00:15:57.183 fused_ordering(416) 00:15:57.183 fused_ordering(417) 00:15:57.183 fused_ordering(418) 00:15:57.183 fused_ordering(419) 00:15:57.183 fused_ordering(420) 00:15:57.183 fused_ordering(421) 00:15:57.183 fused_ordering(422) 00:15:57.183 fused_ordering(423) 00:15:57.183 fused_ordering(424) 00:15:57.183 fused_ordering(425) 00:15:57.183 fused_ordering(426) 00:15:57.183 fused_ordering(427) 00:15:57.183 fused_ordering(428) 00:15:57.183 fused_ordering(429) 00:15:57.183 fused_ordering(430) 00:15:57.183 fused_ordering(431) 00:15:57.183 fused_ordering(432) 00:15:57.183 fused_ordering(433) 00:15:57.183 fused_ordering(434) 00:15:57.183 fused_ordering(435) 00:15:57.183 fused_ordering(436) 00:15:57.183 fused_ordering(437) 00:15:57.183 fused_ordering(438) 00:15:57.183 fused_ordering(439) 00:15:57.183 fused_ordering(440) 00:15:57.183 fused_ordering(441) 00:15:57.183 fused_ordering(442) 00:15:57.183 fused_ordering(443) 00:15:57.183 fused_ordering(444) 00:15:57.183 fused_ordering(445) 00:15:57.183 fused_ordering(446) 00:15:57.183 fused_ordering(447) 00:15:57.183 fused_ordering(448) 00:15:57.183 fused_ordering(449) 00:15:57.183 fused_ordering(450) 00:15:57.183 fused_ordering(451) 00:15:57.183 fused_ordering(452) 00:15:57.183 fused_ordering(453) 00:15:57.183 fused_ordering(454) 00:15:57.183 fused_ordering(455) 00:15:57.183 fused_ordering(456) 00:15:57.183 fused_ordering(457) 00:15:57.183 fused_ordering(458) 00:15:57.183 fused_ordering(459) 00:15:57.183 fused_ordering(460) 00:15:57.183 fused_ordering(461) 00:15:57.183 fused_ordering(462) 00:15:57.183 fused_ordering(463) 00:15:57.183 fused_ordering(464) 00:15:57.183 fused_ordering(465) 00:15:57.183 fused_ordering(466) 00:15:57.183 fused_ordering(467) 00:15:57.183 fused_ordering(468) 00:15:57.183 fused_ordering(469) 00:15:57.183 fused_ordering(470) 00:15:57.183 fused_ordering(471) 00:15:57.183 fused_ordering(472) 00:15:57.183 fused_ordering(473) 00:15:57.183 fused_ordering(474) 00:15:57.183 fused_ordering(475) 00:15:57.183 fused_ordering(476) 00:15:57.183 fused_ordering(477) 00:15:57.183 fused_ordering(478) 00:15:57.183 fused_ordering(479) 00:15:57.183 fused_ordering(480) 00:15:57.183 fused_ordering(481) 00:15:57.183 fused_ordering(482) 00:15:57.183 fused_ordering(483) 00:15:57.183 fused_ordering(484) 00:15:57.183 fused_ordering(485) 00:15:57.183 fused_ordering(486) 00:15:57.183 fused_ordering(487) 00:15:57.183 fused_ordering(488) 00:15:57.183 fused_ordering(489) 00:15:57.183 fused_ordering(490) 00:15:57.183 fused_ordering(491) 00:15:57.183 
fused_ordering(492) 00:15:57.183 fused_ordering(493) 00:15:57.183 fused_ordering(494) 00:15:57.183 fused_ordering(495) 00:15:57.183 fused_ordering(496) 00:15:57.183 fused_ordering(497) 00:15:57.183 fused_ordering(498) 00:15:57.183 fused_ordering(499) 00:15:57.183 fused_ordering(500) 00:15:57.183 fused_ordering(501) 00:15:57.183 fused_ordering(502) 00:15:57.183 fused_ordering(503) 00:15:57.183 fused_ordering(504) 00:15:57.183 fused_ordering(505) 00:15:57.183 fused_ordering(506) 00:15:57.183 fused_ordering(507) 00:15:57.183 fused_ordering(508) 00:15:57.183 fused_ordering(509) 00:15:57.183 fused_ordering(510) 00:15:57.183 fused_ordering(511) 00:15:57.183 fused_ordering(512) 00:15:57.183 fused_ordering(513) 00:15:57.183 fused_ordering(514) 00:15:57.183 fused_ordering(515) 00:15:57.183 fused_ordering(516) 00:15:57.183 fused_ordering(517) 00:15:57.183 fused_ordering(518) 00:15:57.183 fused_ordering(519) 00:15:57.183 fused_ordering(520) 00:15:57.183 fused_ordering(521) 00:15:57.183 fused_ordering(522) 00:15:57.183 fused_ordering(523) 00:15:57.183 fused_ordering(524) 00:15:57.183 fused_ordering(525) 00:15:57.183 fused_ordering(526) 00:15:57.183 fused_ordering(527) 00:15:57.183 fused_ordering(528) 00:15:57.183 fused_ordering(529) 00:15:57.183 fused_ordering(530) 00:15:57.183 fused_ordering(531) 00:15:57.183 fused_ordering(532) 00:15:57.183 fused_ordering(533) 00:15:57.183 fused_ordering(534) 00:15:57.183 fused_ordering(535) 00:15:57.183 fused_ordering(536) 00:15:57.183 fused_ordering(537) 00:15:57.183 fused_ordering(538) 00:15:57.183 fused_ordering(539) 00:15:57.183 fused_ordering(540) 00:15:57.183 fused_ordering(541) 00:15:57.183 fused_ordering(542) 00:15:57.183 fused_ordering(543) 00:15:57.183 fused_ordering(544) 00:15:57.183 fused_ordering(545) 00:15:57.183 fused_ordering(546) 00:15:57.183 fused_ordering(547) 00:15:57.183 fused_ordering(548) 00:15:57.183 fused_ordering(549) 00:15:57.183 fused_ordering(550) 00:15:57.183 fused_ordering(551) 00:15:57.183 fused_ordering(552) 00:15:57.183 fused_ordering(553) 00:15:57.183 fused_ordering(554) 00:15:57.183 fused_ordering(555) 00:15:57.183 fused_ordering(556) 00:15:57.183 fused_ordering(557) 00:15:57.183 fused_ordering(558) 00:15:57.183 fused_ordering(559) 00:15:57.183 fused_ordering(560) 00:15:57.183 fused_ordering(561) 00:15:57.183 fused_ordering(562) 00:15:57.183 fused_ordering(563) 00:15:57.183 fused_ordering(564) 00:15:57.183 fused_ordering(565) 00:15:57.183 fused_ordering(566) 00:15:57.183 fused_ordering(567) 00:15:57.183 fused_ordering(568) 00:15:57.183 fused_ordering(569) 00:15:57.183 fused_ordering(570) 00:15:57.183 fused_ordering(571) 00:15:57.183 fused_ordering(572) 00:15:57.183 fused_ordering(573) 00:15:57.183 fused_ordering(574) 00:15:57.183 fused_ordering(575) 00:15:57.183 fused_ordering(576) 00:15:57.183 fused_ordering(577) 00:15:57.183 fused_ordering(578) 00:15:57.183 fused_ordering(579) 00:15:57.183 fused_ordering(580) 00:15:57.183 fused_ordering(581) 00:15:57.183 fused_ordering(582) 00:15:57.183 fused_ordering(583) 00:15:57.183 fused_ordering(584) 00:15:57.183 fused_ordering(585) 00:15:57.183 fused_ordering(586) 00:15:57.183 fused_ordering(587) 00:15:57.183 fused_ordering(588) 00:15:57.183 fused_ordering(589) 00:15:57.183 fused_ordering(590) 00:15:57.183 fused_ordering(591) 00:15:57.183 fused_ordering(592) 00:15:57.183 fused_ordering(593) 00:15:57.183 fused_ordering(594) 00:15:57.183 fused_ordering(595) 00:15:57.183 fused_ordering(596) 00:15:57.183 fused_ordering(597) 00:15:57.183 fused_ordering(598) 00:15:57.183 fused_ordering(599) 
00:15:57.183 fused_ordering(600) 00:15:57.183 fused_ordering(601) 00:15:57.183 fused_ordering(602) 00:15:57.183 fused_ordering(603) 00:15:57.183 fused_ordering(604) 00:15:57.183 fused_ordering(605) 00:15:57.183 fused_ordering(606) 00:15:57.183 fused_ordering(607) 00:15:57.183 fused_ordering(608) 00:15:57.183 fused_ordering(609) 00:15:57.183 fused_ordering(610) 00:15:57.183 fused_ordering(611) 00:15:57.183 fused_ordering(612) 00:15:57.183 fused_ordering(613) 00:15:57.183 fused_ordering(614) 00:15:57.183 fused_ordering(615) 00:15:58.127 fused_ordering(616) 00:15:58.127 fused_ordering(617) 00:15:58.127 fused_ordering(618) 00:15:58.127 fused_ordering(619) 00:15:58.127 fused_ordering(620) 00:15:58.127 fused_ordering(621) 00:15:58.127 fused_ordering(622) 00:15:58.127 fused_ordering(623) 00:15:58.127 fused_ordering(624) 00:15:58.127 fused_ordering(625) 00:15:58.127 fused_ordering(626) 00:15:58.127 fused_ordering(627) 00:15:58.127 fused_ordering(628) 00:15:58.127 fused_ordering(629) 00:15:58.127 fused_ordering(630) 00:15:58.127 fused_ordering(631) 00:15:58.127 fused_ordering(632) 00:15:58.127 fused_ordering(633) 00:15:58.127 fused_ordering(634) 00:15:58.127 fused_ordering(635) 00:15:58.127 fused_ordering(636) 00:15:58.127 fused_ordering(637) 00:15:58.127 fused_ordering(638) 00:15:58.127 fused_ordering(639) 00:15:58.127 fused_ordering(640) 00:15:58.127 fused_ordering(641) 00:15:58.127 fused_ordering(642) 00:15:58.127 fused_ordering(643) 00:15:58.127 fused_ordering(644) 00:15:58.127 fused_ordering(645) 00:15:58.127 fused_ordering(646) 00:15:58.127 fused_ordering(647) 00:15:58.127 fused_ordering(648) 00:15:58.127 fused_ordering(649) 00:15:58.127 fused_ordering(650) 00:15:58.127 fused_ordering(651) 00:15:58.127 fused_ordering(652) 00:15:58.127 fused_ordering(653) 00:15:58.127 fused_ordering(654) 00:15:58.127 fused_ordering(655) 00:15:58.127 fused_ordering(656) 00:15:58.127 fused_ordering(657) 00:15:58.127 fused_ordering(658) 00:15:58.127 fused_ordering(659) 00:15:58.127 fused_ordering(660) 00:15:58.127 fused_ordering(661) 00:15:58.127 fused_ordering(662) 00:15:58.127 fused_ordering(663) 00:15:58.127 fused_ordering(664) 00:15:58.127 fused_ordering(665) 00:15:58.127 fused_ordering(666) 00:15:58.127 fused_ordering(667) 00:15:58.127 fused_ordering(668) 00:15:58.127 fused_ordering(669) 00:15:58.127 fused_ordering(670) 00:15:58.127 fused_ordering(671) 00:15:58.127 fused_ordering(672) 00:15:58.127 fused_ordering(673) 00:15:58.127 fused_ordering(674) 00:15:58.127 fused_ordering(675) 00:15:58.127 fused_ordering(676) 00:15:58.127 fused_ordering(677) 00:15:58.127 fused_ordering(678) 00:15:58.127 fused_ordering(679) 00:15:58.127 fused_ordering(680) 00:15:58.127 fused_ordering(681) 00:15:58.127 fused_ordering(682) 00:15:58.127 fused_ordering(683) 00:15:58.127 fused_ordering(684) 00:15:58.127 fused_ordering(685) 00:15:58.127 fused_ordering(686) 00:15:58.127 fused_ordering(687) 00:15:58.127 fused_ordering(688) 00:15:58.127 fused_ordering(689) 00:15:58.127 fused_ordering(690) 00:15:58.127 fused_ordering(691) 00:15:58.127 fused_ordering(692) 00:15:58.127 fused_ordering(693) 00:15:58.127 fused_ordering(694) 00:15:58.127 fused_ordering(695) 00:15:58.127 fused_ordering(696) 00:15:58.127 fused_ordering(697) 00:15:58.127 fused_ordering(698) 00:15:58.127 fused_ordering(699) 00:15:58.127 fused_ordering(700) 00:15:58.127 fused_ordering(701) 00:15:58.127 fused_ordering(702) 00:15:58.127 fused_ordering(703) 00:15:58.127 fused_ordering(704) 00:15:58.127 fused_ordering(705) 00:15:58.127 fused_ordering(706) 00:15:58.127 
fused_ordering(707) 00:15:58.127 fused_ordering(708) 00:15:58.127 fused_ordering(709) 00:15:58.127 fused_ordering(710) 00:15:58.127 fused_ordering(711) 00:15:58.127 fused_ordering(712) 00:15:58.127 fused_ordering(713) 00:15:58.127 fused_ordering(714) 00:15:58.127 fused_ordering(715) 00:15:58.127 fused_ordering(716) 00:15:58.127 fused_ordering(717) 00:15:58.127 fused_ordering(718) 00:15:58.127 fused_ordering(719) 00:15:58.127 fused_ordering(720) 00:15:58.127 fused_ordering(721) 00:15:58.127 fused_ordering(722) 00:15:58.127 fused_ordering(723) 00:15:58.127 fused_ordering(724) 00:15:58.127 fused_ordering(725) 00:15:58.127 fused_ordering(726) 00:15:58.127 fused_ordering(727) 00:15:58.127 fused_ordering(728) 00:15:58.127 fused_ordering(729) 00:15:58.127 fused_ordering(730) 00:15:58.127 fused_ordering(731) 00:15:58.127 fused_ordering(732) 00:15:58.127 fused_ordering(733) 00:15:58.127 fused_ordering(734) 00:15:58.127 fused_ordering(735) 00:15:58.127 fused_ordering(736) 00:15:58.127 fused_ordering(737) 00:15:58.127 fused_ordering(738) 00:15:58.127 fused_ordering(739) 00:15:58.127 fused_ordering(740) 00:15:58.127 fused_ordering(741) 00:15:58.127 fused_ordering(742) 00:15:58.127 fused_ordering(743) 00:15:58.127 fused_ordering(744) 00:15:58.127 fused_ordering(745) 00:15:58.127 fused_ordering(746) 00:15:58.127 fused_ordering(747) 00:15:58.127 fused_ordering(748) 00:15:58.127 fused_ordering(749) 00:15:58.127 fused_ordering(750) 00:15:58.127 fused_ordering(751) 00:15:58.127 fused_ordering(752) 00:15:58.127 fused_ordering(753) 00:15:58.127 fused_ordering(754) 00:15:58.127 fused_ordering(755) 00:15:58.127 fused_ordering(756) 00:15:58.127 fused_ordering(757) 00:15:58.127 fused_ordering(758) 00:15:58.127 fused_ordering(759) 00:15:58.127 fused_ordering(760) 00:15:58.127 fused_ordering(761) 00:15:58.127 fused_ordering(762) 00:15:58.127 fused_ordering(763) 00:15:58.127 fused_ordering(764) 00:15:58.127 fused_ordering(765) 00:15:58.127 fused_ordering(766) 00:15:58.127 fused_ordering(767) 00:15:58.127 fused_ordering(768) 00:15:58.127 fused_ordering(769) 00:15:58.127 fused_ordering(770) 00:15:58.127 fused_ordering(771) 00:15:58.127 fused_ordering(772) 00:15:58.127 fused_ordering(773) 00:15:58.127 fused_ordering(774) 00:15:58.127 fused_ordering(775) 00:15:58.127 fused_ordering(776) 00:15:58.127 fused_ordering(777) 00:15:58.127 fused_ordering(778) 00:15:58.127 fused_ordering(779) 00:15:58.127 fused_ordering(780) 00:15:58.127 fused_ordering(781) 00:15:58.127 fused_ordering(782) 00:15:58.127 fused_ordering(783) 00:15:58.127 fused_ordering(784) 00:15:58.127 fused_ordering(785) 00:15:58.127 fused_ordering(786) 00:15:58.127 fused_ordering(787) 00:15:58.127 fused_ordering(788) 00:15:58.127 fused_ordering(789) 00:15:58.127 fused_ordering(790) 00:15:58.127 fused_ordering(791) 00:15:58.127 fused_ordering(792) 00:15:58.127 fused_ordering(793) 00:15:58.127 fused_ordering(794) 00:15:58.127 fused_ordering(795) 00:15:58.127 fused_ordering(796) 00:15:58.127 fused_ordering(797) 00:15:58.127 fused_ordering(798) 00:15:58.127 fused_ordering(799) 00:15:58.127 fused_ordering(800) 00:15:58.127 fused_ordering(801) 00:15:58.127 fused_ordering(802) 00:15:58.127 fused_ordering(803) 00:15:58.127 fused_ordering(804) 00:15:58.127 fused_ordering(805) 00:15:58.127 fused_ordering(806) 00:15:58.127 fused_ordering(807) 00:15:58.127 fused_ordering(808) 00:15:58.127 fused_ordering(809) 00:15:58.127 fused_ordering(810) 00:15:58.127 fused_ordering(811) 00:15:58.127 fused_ordering(812) 00:15:58.127 fused_ordering(813) 00:15:58.127 fused_ordering(814) 
00:15:58.127 fused_ordering(815) 00:15:58.127 fused_ordering(816) 00:15:58.128 fused_ordering(817) 00:15:58.128 fused_ordering(818) 00:15:58.128 fused_ordering(819) 00:15:58.128 fused_ordering(820) 00:15:58.701 fused_ordering(821) 00:15:58.701 fused_ordering(822) 00:15:58.701 fused_ordering(823) 00:15:58.701 fused_ordering(824) 00:15:58.701 fused_ordering(825) 00:15:58.701 fused_ordering(826) 00:15:58.701 fused_ordering(827) 00:15:58.701 fused_ordering(828) 00:15:58.701 fused_ordering(829) 00:15:58.701 fused_ordering(830) 00:15:58.701 fused_ordering(831) 00:15:58.701 fused_ordering(832) 00:15:58.701 fused_ordering(833) 00:15:58.701 fused_ordering(834) 00:15:58.701 fused_ordering(835) 00:15:58.701 fused_ordering(836) 00:15:58.701 fused_ordering(837) 00:15:58.701 fused_ordering(838) 00:15:58.701 fused_ordering(839) 00:15:58.701 fused_ordering(840) 00:15:58.701 fused_ordering(841) 00:15:58.701 fused_ordering(842) 00:15:58.701 fused_ordering(843) 00:15:58.701 fused_ordering(844) 00:15:58.701 fused_ordering(845) 00:15:58.701 fused_ordering(846) 00:15:58.701 fused_ordering(847) 00:15:58.701 fused_ordering(848) 00:15:58.701 fused_ordering(849) 00:15:58.701 fused_ordering(850) 00:15:58.701 fused_ordering(851) 00:15:58.701 fused_ordering(852) 00:15:58.701 fused_ordering(853) 00:15:58.701 fused_ordering(854) 00:15:58.701 fused_ordering(855) 00:15:58.701 fused_ordering(856) 00:15:58.701 fused_ordering(857) 00:15:58.701 fused_ordering(858) 00:15:58.701 fused_ordering(859) 00:15:58.701 fused_ordering(860) 00:15:58.701 fused_ordering(861) 00:15:58.701 fused_ordering(862) 00:15:58.701 fused_ordering(863) 00:15:58.701 fused_ordering(864) 00:15:58.701 fused_ordering(865) 00:15:58.701 fused_ordering(866) 00:15:58.701 fused_ordering(867) 00:15:58.701 fused_ordering(868) 00:15:58.701 fused_ordering(869) 00:15:58.701 fused_ordering(870) 00:15:58.701 fused_ordering(871) 00:15:58.701 fused_ordering(872) 00:15:58.701 fused_ordering(873) 00:15:58.701 fused_ordering(874) 00:15:58.701 fused_ordering(875) 00:15:58.701 fused_ordering(876) 00:15:58.701 fused_ordering(877) 00:15:58.701 fused_ordering(878) 00:15:58.701 fused_ordering(879) 00:15:58.701 fused_ordering(880) 00:15:58.701 fused_ordering(881) 00:15:58.701 fused_ordering(882) 00:15:58.701 fused_ordering(883) 00:15:58.701 fused_ordering(884) 00:15:58.701 fused_ordering(885) 00:15:58.701 fused_ordering(886) 00:15:58.701 fused_ordering(887) 00:15:58.701 fused_ordering(888) 00:15:58.701 fused_ordering(889) 00:15:58.701 fused_ordering(890) 00:15:58.701 fused_ordering(891) 00:15:58.701 fused_ordering(892) 00:15:58.701 fused_ordering(893) 00:15:58.701 fused_ordering(894) 00:15:58.701 fused_ordering(895) 00:15:58.701 fused_ordering(896) 00:15:58.701 fused_ordering(897) 00:15:58.701 fused_ordering(898) 00:15:58.701 fused_ordering(899) 00:15:58.701 fused_ordering(900) 00:15:58.701 fused_ordering(901) 00:15:58.701 fused_ordering(902) 00:15:58.701 fused_ordering(903) 00:15:58.701 fused_ordering(904) 00:15:58.701 fused_ordering(905) 00:15:58.701 fused_ordering(906) 00:15:58.701 fused_ordering(907) 00:15:58.701 fused_ordering(908) 00:15:58.701 fused_ordering(909) 00:15:58.701 fused_ordering(910) 00:15:58.701 fused_ordering(911) 00:15:58.701 fused_ordering(912) 00:15:58.701 fused_ordering(913) 00:15:58.701 fused_ordering(914) 00:15:58.701 fused_ordering(915) 00:15:58.701 fused_ordering(916) 00:15:58.701 fused_ordering(917) 00:15:58.701 fused_ordering(918) 00:15:58.701 fused_ordering(919) 00:15:58.701 fused_ordering(920) 00:15:58.701 fused_ordering(921) 00:15:58.701 
fused_ordering(922) 00:15:58.701 fused_ordering(923) 00:15:58.701 fused_ordering(924) 00:15:58.701 fused_ordering(925) 00:15:58.701 fused_ordering(926) 00:15:58.701 fused_ordering(927) 00:15:58.701 fused_ordering(928) 00:15:58.701 fused_ordering(929) 00:15:58.701 fused_ordering(930) 00:15:58.701 fused_ordering(931) 00:15:58.701 fused_ordering(932) 00:15:58.701 fused_ordering(933) 00:15:58.701 fused_ordering(934) 00:15:58.701 fused_ordering(935) 00:15:58.701 fused_ordering(936) 00:15:58.701 fused_ordering(937) 00:15:58.701 fused_ordering(938) 00:15:58.701 fused_ordering(939) 00:15:58.701 fused_ordering(940) 00:15:58.701 fused_ordering(941) 00:15:58.701 fused_ordering(942) 00:15:58.701 fused_ordering(943) 00:15:58.701 fused_ordering(944) 00:15:58.701 fused_ordering(945) 00:15:58.701 fused_ordering(946) 00:15:58.701 fused_ordering(947) 00:15:58.701 fused_ordering(948) 00:15:58.701 fused_ordering(949) 00:15:58.701 fused_ordering(950) 00:15:58.701 fused_ordering(951) 00:15:58.701 fused_ordering(952) 00:15:58.701 fused_ordering(953) 00:15:58.701 fused_ordering(954) 00:15:58.701 fused_ordering(955) 00:15:58.701 fused_ordering(956) 00:15:58.701 fused_ordering(957) 00:15:58.701 fused_ordering(958) 00:15:58.701 fused_ordering(959) 00:15:58.701 fused_ordering(960) 00:15:58.701 fused_ordering(961) 00:15:58.701 fused_ordering(962) 00:15:58.701 fused_ordering(963) 00:15:58.701 fused_ordering(964) 00:15:58.701 fused_ordering(965) 00:15:58.701 fused_ordering(966) 00:15:58.701 fused_ordering(967) 00:15:58.701 fused_ordering(968) 00:15:58.701 fused_ordering(969) 00:15:58.701 fused_ordering(970) 00:15:58.701 fused_ordering(971) 00:15:58.701 fused_ordering(972) 00:15:58.701 fused_ordering(973) 00:15:58.701 fused_ordering(974) 00:15:58.701 fused_ordering(975) 00:15:58.701 fused_ordering(976) 00:15:58.701 fused_ordering(977) 00:15:58.701 fused_ordering(978) 00:15:58.701 fused_ordering(979) 00:15:58.701 fused_ordering(980) 00:15:58.701 fused_ordering(981) 00:15:58.701 fused_ordering(982) 00:15:58.701 fused_ordering(983) 00:15:58.701 fused_ordering(984) 00:15:58.701 fused_ordering(985) 00:15:58.701 fused_ordering(986) 00:15:58.701 fused_ordering(987) 00:15:58.701 fused_ordering(988) 00:15:58.701 fused_ordering(989) 00:15:58.701 fused_ordering(990) 00:15:58.701 fused_ordering(991) 00:15:58.701 fused_ordering(992) 00:15:58.701 fused_ordering(993) 00:15:58.701 fused_ordering(994) 00:15:58.701 fused_ordering(995) 00:15:58.701 fused_ordering(996) 00:15:58.701 fused_ordering(997) 00:15:58.701 fused_ordering(998) 00:15:58.701 fused_ordering(999) 00:15:58.701 fused_ordering(1000) 00:15:58.701 fused_ordering(1001) 00:15:58.701 fused_ordering(1002) 00:15:58.701 fused_ordering(1003) 00:15:58.701 fused_ordering(1004) 00:15:58.701 fused_ordering(1005) 00:15:58.701 fused_ordering(1006) 00:15:58.701 fused_ordering(1007) 00:15:58.701 fused_ordering(1008) 00:15:58.701 fused_ordering(1009) 00:15:58.701 fused_ordering(1010) 00:15:58.701 fused_ordering(1011) 00:15:58.701 fused_ordering(1012) 00:15:58.701 fused_ordering(1013) 00:15:58.701 fused_ordering(1014) 00:15:58.701 fused_ordering(1015) 00:15:58.701 fused_ordering(1016) 00:15:58.701 fused_ordering(1017) 00:15:58.701 fused_ordering(1018) 00:15:58.701 fused_ordering(1019) 00:15:58.701 fused_ordering(1020) 00:15:58.701 fused_ordering(1021) 00:15:58.701 fused_ordering(1022) 00:15:58.701 fused_ordering(1023) 00:15:58.701 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:58.701 16:55:18 
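Editor's note: the fused_ordering run above boils down to two steps from target/fused_ordering.sh — expose a null-backed namespace on the already-running NVMe/TCP target, then point the fused_ordering example app at that subsystem; the fused_ordering(0)..(1023) lines are its per-iteration progress output. A minimal stand-alone sketch of the same sequence is below; the paths, NQN and address values are taken from this log, a target listening on 10.0.0.2:4420 is assumed, and the NULL1 bdev creation (done earlier in the script, outside this excerpt) is shown only approximately.

    # assumes nvmf_tgt is already running with a TCP listener on 10.0.0.2:4420
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NQN=nqn.2016-06.io.spdk:cnode1

    # back the subsystem with a ~1 GiB null bdev and attach it as a namespace
    $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512   # created earlier in fused_ordering.sh, roughly like this
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns $NQN NULL1

    # drive fused-command ordering against the exported namespace over TCP
    $SPDK/test/nvme/fused_ordering/fused_ordering \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"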
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:58.701 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.701 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:58.701 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.969 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:58.969 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.969 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.969 rmmod nvme_tcp 00:15:58.969 rmmod nvme_fabrics 00:15:58.969 rmmod nvme_keyring 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1386960 ']' 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1386960 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1386960 ']' 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1386960 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1386960 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1386960' 00:15:58.969 killing process with pid 1386960 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1386960 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1386960 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.969 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.573 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.573 00:16:01.573 real 0m14.029s 00:16:01.573 user 0m8.137s 00:16:01.573 sys 0m7.675s 00:16:01.573 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.573 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:01.573 ************************************ 00:16:01.573 END TEST nvmf_fused_ordering 00:16:01.573 ************************************ 00:16:01.573 16:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:01.573 16:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:01.573 16:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.573 16:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.573 ************************************ 00:16:01.573 START TEST nvmf_ns_masking 00:16:01.573 ************************************ 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:01.574 * Looking for test storage... 00:16:01.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7c097fab-fda4-4cf9-8ecd-fb305aae3568 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d19a12a1-a524-4694-a7a1-3e0604b20e77 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d63cc531-4254-44a6-afe5-7cb542116a90 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.574 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.164 
16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:08.164 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:08.164 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:08.164 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.164 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:08.165 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:08.165 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:08.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:16:08.165 00:16:08.165 --- 10.0.0.2 ping statistics --- 00:16:08.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.165 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:08.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:16:08.165 00:16:08.165 --- 10.0.0.1 ping statistics --- 00:16:08.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.165 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1391901 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1391901 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1391901 ']' 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.165 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:08.165 [2024-07-25 16:55:28.188704] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
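Editor's note: the nvmf_tcp_init sequence traced above isolates one port of the NIC pair (cvl_0_0) in a network namespace for the target, keeps the peer port (cvl_0_1) in the default namespace as the initiator, and uses the two pings to confirm reachability in both directions before nvmf_tgt is launched inside that namespace. A condensed sketch of the same wiring follows; interface names and addresses are the ones in this log and would need adjusting to the local NICs.

    TGT_IF=cvl_0_0            # target-side port, moved into the namespace
    INI_IF=cvl_0_1            # initiator-side port, stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # allow NVMe/TCP traffic in on the initiator port and verify both directions
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # the target app then runs inside the namespace, as in the trace above:
    # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF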
00:16:08.165 [2024-07-25 16:55:28.188769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.165 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.165 [2024-07-25 16:55:28.260575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.165 [2024-07-25 16:55:28.335242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.165 [2024-07-25 16:55:28.335279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.165 [2024-07-25 16:55:28.335286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.165 [2024-07-25 16:55:28.335292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.165 [2024-07-25 16:55:28.335298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.165 [2024-07-25 16:55:28.335315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.738 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.738 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:08.738 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.738 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.738 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:08.738 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.738 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:09.000 [2024-07-25 16:55:29.138716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.000 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:09.000 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:09.000 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:09.260 Malloc1 00:16:09.260 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:09.260 Malloc2 00:16:09.260 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:09.521 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:09.782 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.782 [2024-07-25 16:55:29.976148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.782 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:09.782 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d63cc531-4254-44a6-afe5-7cb542116a90 -a 10.0.0.2 -s 4420 -i 4 00:16:10.043 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:10.043 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.043 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.043 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:10.043 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:11.958 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.219 [ 0]:0x1 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c653374dc65b4ff2abf8922e2f85e45a 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c653374dc65b4ff2abf8922e2f85e45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.219 [ 0]:0x1 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.219 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c653374dc65b4ff2abf8922e2f85e45a 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c653374dc65b4ff2abf8922e2f85e45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:12.481 [ 1]:0x2 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a56d8b0d804a0f941858283b563d18 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a56d8b0d804a0f941858283b563d18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.481 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.743 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:12.743 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:12.743 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d63cc531-4254-44a6-afe5-7cb542116a90 -a 10.0.0.2 -s 4420 -i 4 00:16:13.003 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:13.003 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.003 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:13.003 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:13.003 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:13.003 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:14.919 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.180 16:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.180 [ 0]:0x2 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a56d8b0d804a0f941858283b563d18 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a56d8b0d804a0f941858283b563d18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.180 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.180 [ 0]:0x1 00:16:15.181 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.181 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c653374dc65b4ff2abf8922e2f85e45a 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c653374dc65b4ff2abf8922e2f85e45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.442 [ 1]:0x2 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a56d8b0d804a0f941858283b563d18 00:16:15.442 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a56d8b0d804a0f941858283b563d18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.442 16:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.704 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.705 [ 0]:0x2 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a56d8b0d804a0f941858283b563d18 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a56d8b0d804a0f941858283b563d18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
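The sequence above is the core of namespace masking: a namespace registered with --no-auto-visible is hidden from every host until nvmf_ns_add_host grants its NQN access, and nvmf_ns_remove_host hides it again; on the initiator the hidden namespace drops out of nvme list-ns and its NGUID reads back as all zeros in the checks above. A condensed sketch of that flow, assuming the subsystem, TCP transport and 10.0.0.2:4420 listener created earlier in this run (the job additionally passes a host identifier via -I and -i 4 on connect; rpc.py paths shortened):

  # add the namespace without auto-visibility: no host can see it yet
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # grant visibility to one host NQN, then check from the initiator side
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
  nvme list-ns /dev/nvme0                               # nsid 0x1 is now listed
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # non-zero NGUID while visible
  # revoke access again; the namespace disappears for this host
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1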
00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.705 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d63cc531-4254-44a6-afe5-7cb542116a90 -a 10.0.0.2 -s 4420 -i 4 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:15.966 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.514 [ 0]:0x1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c653374dc65b4ff2abf8922e2f85e45a 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c653374dc65b4ff2abf8922e2f85e45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.514 [ 1]:0x2 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a56d8b0d804a0f941858283b563d18 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a56d8b0d804a0f941858283b563d18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.514 [ 0]:0x2 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a56d8b0d804a0f941858283b563d18 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a56d8b0d804a0f941858283b563d18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:18.514 [2024-07-25 16:55:38.761438] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:18.514 request: 00:16:18.514 { 00:16:18.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.514 "nsid": 2, 00:16:18.514 "host": 
"nqn.2016-06.io.spdk:host1", 00:16:18.514 "method": "nvmf_ns_remove_host", 00:16:18.514 "req_id": 1 00:16:18.514 } 00:16:18.514 Got JSON-RPC error response 00:16:18.514 response: 00:16:18.514 { 00:16:18.514 "code": -32602, 00:16:18.514 "message": "Invalid parameters" 00:16:18.514 } 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:18.514 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.515 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.776 [ 0]:0x2 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.776 16:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a56d8b0d804a0f941858283b563d18 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a56d8b0d804a0f941858283b563d18 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1394180 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1394180 /var/tmp/host.sock 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1394180 ']' 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:18.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:18.776 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:18.776 [2024-07-25 16:55:39.005669] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
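The failing call traced above is the negative case: namespace 2 was added without --no-auto-visible, so it is not under masking and the target rejects host add/remove requests for it with the JSON-RPC error captured in the log. The equivalent invocation and the request/response pair, as shown above (rpc.py path shortened):

  # namespace 2 has no allowed-host list, so the RPC is rejected
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
  # request  : {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 2,
  #             "host": "nqn.2016-06.io.spdk:host1", "method": "nvmf_ns_remove_host", "req_id": 1}
  # response : {"code": -32602, "message": "Invalid parameters"}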
00:16:18.776 [2024-07-25 16:55:39.005719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394180 ] 00:16:18.776 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.037 [2024-07-25 16:55:39.080929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.037 [2024-07-25 16:55:39.145022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.609 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.609 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:19.609 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.870 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:19.870 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7c097fab-fda4-4cf9-8ecd-fb305aae3568 00:16:19.870 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:19.870 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7C097FABFDA44CF98ECDFB305AAE3568 -i 00:16:20.132 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d19a12a1-a524-4694-a7a1-3e0604b20e77 00:16:20.132 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:20.132 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D19A12A1A5244694A7A13E0604B20E77 -i 00:16:20.393 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:20.393 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:20.654 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:20.654 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:20.915 nvme0n1 00:16:20.915 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:20.915 16:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:21.223 nvme1n2 00:16:21.223 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:21.223 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:21.223 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:21.223 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:21.223 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7c097fab-fda4-4cf9-8ecd-fb305aae3568 == \7\c\0\9\7\f\a\b\-\f\d\a\4\-\4\c\f\9\-\8\e\c\d\-\f\b\3\0\5\a\a\e\3\5\6\8 ]] 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:21.485 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d19a12a1-a524-4694-a7a1-3e0604b20e77 == \d\1\9\a\1\2\a\1\-\a\5\2\4\-\4\6\9\4\-\a\7\a\1\-\3\e\0\6\0\4\b\2\0\e\7\7 ]] 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1394180 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1394180 ']' 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1394180 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1394180 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1394180' 00:16:21.746 killing process with pid 1394180 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1394180 00:16:21.746 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1394180 00:16:22.008 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.269 rmmod nvme_tcp 00:16:22.269 rmmod nvme_fabrics 00:16:22.269 rmmod nvme_keyring 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1391901 ']' 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1391901 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1391901 ']' 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1391901 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391901 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391901' 00:16:22.269 killing process with pid 1391901 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1391901 00:16:22.269 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1391901 00:16:22.532 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.532 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.532 
16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.532 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.532 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.532 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.532 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.532 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:24.447 00:16:24.447 real 0m23.312s 00:16:24.447 user 0m23.472s 00:16:24.447 sys 0m7.084s 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:24.447 ************************************ 00:16:24.447 END TEST nvmf_ns_masking 00:16:24.447 ************************************ 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.447 16:55:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.709 ************************************ 00:16:24.709 START TEST nvmf_nvme_cli 00:16:24.709 ************************************ 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:24.709 * Looking for test storage... 
00:16:24.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.709 16:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.709 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.710 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.859 16:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:32.859 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:32.859 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:32.859 16:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:32.859 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:32.859 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:32.859 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.860 16:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:32.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:16:32.860 00:16:32.860 --- 10.0.0.2 ping statistics --- 00:16:32.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.860 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:32.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:16:32.860 00:16:32.860 --- 10.0.0.1 ping statistics --- 00:16:32.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.860 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1399088 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1399088 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1399088 ']' 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.860 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 [2024-07-25 16:55:52.022519] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:16:32.860 [2024-07-25 16:55:52.022569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.860 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.860 [2024-07-25 16:55:52.090324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.860 [2024-07-25 16:55:52.156748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.860 [2024-07-25 16:55:52.156787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.860 [2024-07-25 16:55:52.156795] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.860 [2024-07-25 16:55:52.156801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.860 [2024-07-25 16:55:52.156807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.860 [2024-07-25 16:55:52.156951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.860 [2024-07-25 16:55:52.157064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.860 [2024-07-25 16:55:52.157251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.860 [2024-07-25 16:55:52.157271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 [2024-07-25 16:55:52.847213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 Malloc0 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:32.860 16:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 Malloc1 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 [2024-07-25 16:55:52.937132] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.860 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.861 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:32.861 00:16:32.861 Discovery Log Number of Records 2, Generation counter 2 00:16:32.861 =====Discovery Log Entry 0====== 00:16:32.861 trtype: tcp 00:16:32.861 adrfam: ipv4 00:16:32.861 subtype: current discovery subsystem 00:16:32.861 treq: not required 
00:16:32.861 portid: 0 00:16:32.861 trsvcid: 4420 00:16:32.861 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:32.861 traddr: 10.0.0.2 00:16:32.861 eflags: explicit discovery connections, duplicate discovery information 00:16:32.861 sectype: none 00:16:32.861 =====Discovery Log Entry 1====== 00:16:32.861 trtype: tcp 00:16:32.861 adrfam: ipv4 00:16:32.861 subtype: nvme subsystem 00:16:32.861 treq: not required 00:16:32.861 portid: 0 00:16:32.861 trsvcid: 4420 00:16:32.861 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:32.861 traddr: 10.0.0.2 00:16:32.861 eflags: none 00:16:32.861 sectype: none 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:32.861 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.773 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:34.773 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:34.773 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.773 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:34.773 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:34.773 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:36.690 /dev/nvme0n1 ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.690 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:36.690 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:36.691 rmmod nvme_tcp 00:16:36.691 rmmod nvme_fabrics 00:16:36.691 rmmod nvme_keyring 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1399088 ']' 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1399088 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1399088 ']' 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1399088 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.691 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1399088 00:16:36.953 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.953 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.953 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1399088' 00:16:36.953 killing process with pid 1399088 00:16:36.953 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1399088 00:16:36.953 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1399088 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.953 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:39.502 00:16:39.502 real 0m14.460s 00:16:39.502 user 0m21.896s 00:16:39.502 sys 0m5.843s 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.502 ************************************ 00:16:39.502 END TEST nvmf_nvme_cli 00:16:39.502 ************************************ 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.502 ************************************ 00:16:39.502 START TEST nvmf_vfio_user 00:16:39.502 ************************************ 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:39.502 * Looking for test storage... 
00:16:39.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.502 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:39.503 16:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1400587 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1400587' 00:16:39.503 Process pid: 1400587 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1400587 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1400587 ']' 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.503 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:39.503 [2024-07-25 16:55:59.490232] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:16:39.503 [2024-07-25 16:55:59.490288] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.503 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.503 [2024-07-25 16:55:59.554762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.503 [2024-07-25 16:55:59.629706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.503 [2024-07-25 16:55:59.629744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:39.503 [2024-07-25 16:55:59.629751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.503 [2024-07-25 16:55:59.629758] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.503 [2024-07-25 16:55:59.629764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.503 [2024-07-25 16:55:59.629905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.503 [2024-07-25 16:55:59.630034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.503 [2024-07-25 16:55:59.630193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.503 [2024-07-25 16:55:59.630194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.075 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.075 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:40.075 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:41.020 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:41.281 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:41.281 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:41.281 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:41.281 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:41.281 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:41.543 Malloc1 00:16:41.543 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:41.543 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:41.804 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:42.066 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:42.066 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:42.066 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:42.066 Malloc2 00:16:42.066 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
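The rpc.py calls traced above are the per-device setup that nvmf_vfio_user.sh drives against the running nvmf_tgt: create the VFIOUSER transport once, then for each device make a socket directory, back it with a malloc bdev, and expose it through a subsystem with a vfio-user listener. The lines below are a condensed, hand-written sketch of that sequence for the first device only, not part of the captured run; every RPC name, path, and argument is copied from the trace, while SPDK_RPC is a shorthand introduced here for the full rpc.py path used in the log.

# Sketch of one vfio-user device setup, assuming nvmf_tgt is already running
# (started in the log with -i 0 -e 0xFFFF -m '[0,1,2,3]').
SPDK_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Create the VFIOUSER transport once, then a control socket directory per device.
$SPDK_RPC nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1

# Back the subsystem with a 64 MiB malloc bdev using 512-byte blocks.
$SPDK_RPC bdev_malloc_create 64 512 -b Malloc1

# Subsystem nqn.2019-07.io.spdk:cnode1 (serial SPDK1): add the namespace and a
# vfio-user listener rooted at the directory created above.
$SPDK_RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$SPDK_RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$SPDK_RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The trace that follows repeats the same steps for vfio-user2/2 with Malloc2, cnode2, and serial SPDK2, after which spdk_nvme_identify attaches to the vfio-user socket directory instead of a TCP address.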
00:16:42.327 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:42.591 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:42.591 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:42.591 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:42.591 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:42.591 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:42.591 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:42.591 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:42.591 [2024-07-25 16:56:02.820424] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:16:42.591 [2024-07-25 16:56:02.820468] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401283 ] 00:16:42.591 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.591 [2024-07-25 16:56:02.852900] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:42.591 [2024-07-25 16:56:02.857615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:42.591 [2024-07-25 16:56:02.857636] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3af58f7000 00:16:42.591 [2024-07-25 16:56:02.858618] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.591 [2024-07-25 16:56:02.859612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.591 [2024-07-25 16:56:02.860617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.591 [2024-07-25 16:56:02.861628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.591 [2024-07-25 16:56:02.862628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.591 [2024-07-25 16:56:02.863634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.863 [2024-07-25 16:56:02.864639] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.863 [2024-07-25 16:56:02.865653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.863 [2024-07-25 16:56:02.866658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:42.863 [2024-07-25 16:56:02.866666] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3af58ec000 00:16:42.863 [2024-07-25 16:56:02.867992] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:42.863 [2024-07-25 16:56:02.888367] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:42.863 [2024-07-25 16:56:02.888385] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:42.863 [2024-07-25 16:56:02.890795] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:42.863 [2024-07-25 16:56:02.890842] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:42.863 [2024-07-25 16:56:02.890933] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:42.863 [2024-07-25 16:56:02.890949] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:42.863 [2024-07-25 16:56:02.890955] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:42.863 [2024-07-25 16:56:02.891795] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:42.863 [2024-07-25 16:56:02.891806] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:42.863 [2024-07-25 16:56:02.891814] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:42.863 [2024-07-25 16:56:02.892796] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:42.863 [2024-07-25 16:56:02.892805] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:42.863 [2024-07-25 16:56:02.892812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:42.863 [2024-07-25 16:56:02.893806] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:42.863 [2024-07-25 16:56:02.893815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:42.863 [2024-07-25 16:56:02.894815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:42.863 [2024-07-25 16:56:02.894824] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:42.863 [2024-07-25 16:56:02.894829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:42.863 [2024-07-25 16:56:02.894836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:42.863 [2024-07-25 16:56:02.894944] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:42.864 [2024-07-25 16:56:02.894949] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:42.864 [2024-07-25 16:56:02.894954] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:42.864 [2024-07-25 16:56:02.895819] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:42.864 [2024-07-25 16:56:02.896822] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:42.864 [2024-07-25 16:56:02.897832] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:42.864 [2024-07-25 16:56:02.898829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:42.864 [2024-07-25 16:56:02.898884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:42.864 [2024-07-25 16:56:02.899840] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:42.864 [2024-07-25 16:56:02.899848] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:42.864 [2024-07-25 16:56:02.899852] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.899874] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:42.864 [2024-07-25 16:56:02.899881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.899896] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.864 [2024-07-25 16:56:02.899901] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.864 [2024-07-25 16:56:02.899905] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.864 [2024-07-25 16:56:02.899919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.899950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.899961] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:42.864 [2024-07-25 16:56:02.899965] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:42.864 [2024-07-25 16:56:02.899970] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:42.864 [2024-07-25 16:56:02.899974] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:42.864 [2024-07-25 16:56:02.899979] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:42.864 [2024-07-25 16:56:02.899984] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:42.864 [2024-07-25 16:56:02.899988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.899996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.864 [2024-07-25 16:56:02.900048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.864 [2024-07-25 16:56:02.900056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.864 [2024-07-25 16:56:02.900064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.864 [2024-07-25 16:56:02.900069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900101] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:42.864 
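The DEBUG lines in this part of the trace are the generic SPDK controller bring-up state machine running over the vfio-user transport: connect adminq, read VS and CAP, check CC.EN, write CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY, AER configuration, keep-alive and queue-count setup. A minimal sketch for isolating just those transitions with the same identify tool invoked above (this assumes the DEBUG output reaches stderr, which is merged in this log; the grep pattern simply matches the messages shown here):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme 2>&1 | grep 'setting state to'
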
[2024-07-25 16:56:02.900107] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900205] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900213] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900221] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:42.864 [2024-07-25 16:56:02.900226] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:42.864 [2024-07-25 16:56:02.900229] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.864 [2024-07-25 16:56:02.900235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900260] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:42.864 [2024-07-25 16:56:02.900272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900290] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.864 [2024-07-25 16:56:02.900294] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.864 [2024-07-25 16:56:02.900297] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.864 [2024-07-25 16:56:02.900303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900348] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.864 [2024-07-25 16:56:02.900353] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.864 [2024-07-25 16:56:02.900356] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.864 [2024-07-25 16:56:02.900362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900377] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900383] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900409] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900414] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:42.864 [2024-07-25 16:56:02.900419] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:42.864 [2024-07-25 16:56:02.900424] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:42.864 [2024-07-25 16:56:02.900442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:42.864 [2024-07-25 16:56:02.900472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:42.864 [2024-07-25 16:56:02.900483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:42.865 [2024-07-25 
16:56:02.900490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:42.865 [2024-07-25 16:56:02.900500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:42.865 [2024-07-25 16:56:02.900512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:42.865 [2024-07-25 16:56:02.900525] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:42.865 [2024-07-25 16:56:02.900529] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:42.865 [2024-07-25 16:56:02.900533] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:42.865 [2024-07-25 16:56:02.900537] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:42.865 [2024-07-25 16:56:02.900540] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:42.865 [2024-07-25 16:56:02.900546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:42.865 [2024-07-25 16:56:02.900553] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:42.865 [2024-07-25 16:56:02.900558] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:42.865 [2024-07-25 16:56:02.900561] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.865 [2024-07-25 16:56:02.900567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:42.865 [2024-07-25 16:56:02.900574] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:42.865 [2024-07-25 16:56:02.900578] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.865 [2024-07-25 16:56:02.900582] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.865 [2024-07-25 16:56:02.900587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.865 [2024-07-25 16:56:02.900595] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:42.865 [2024-07-25 16:56:02.900599] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:42.865 [2024-07-25 16:56:02.900602] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.865 [2024-07-25 16:56:02.900608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:42.865 [2024-07-25 16:56:02.900615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:42.865 [2024-07-25 16:56:02.900627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:42.865 [2024-07-25 
16:56:02.900638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:42.865 [2024-07-25 16:56:02.900645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:42.865 ===================================================== 00:16:42.865 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:42.865 ===================================================== 00:16:42.865 Controller Capabilities/Features 00:16:42.865 ================================ 00:16:42.865 Vendor ID: 4e58 00:16:42.865 Subsystem Vendor ID: 4e58 00:16:42.865 Serial Number: SPDK1 00:16:42.865 Model Number: SPDK bdev Controller 00:16:42.865 Firmware Version: 24.09 00:16:42.865 Recommended Arb Burst: 6 00:16:42.865 IEEE OUI Identifier: 8d 6b 50 00:16:42.865 Multi-path I/O 00:16:42.865 May have multiple subsystem ports: Yes 00:16:42.865 May have multiple controllers: Yes 00:16:42.865 Associated with SR-IOV VF: No 00:16:42.865 Max Data Transfer Size: 131072 00:16:42.865 Max Number of Namespaces: 32 00:16:42.865 Max Number of I/O Queues: 127 00:16:42.865 NVMe Specification Version (VS): 1.3 00:16:42.865 NVMe Specification Version (Identify): 1.3 00:16:42.865 Maximum Queue Entries: 256 00:16:42.865 Contiguous Queues Required: Yes 00:16:42.865 Arbitration Mechanisms Supported 00:16:42.865 Weighted Round Robin: Not Supported 00:16:42.865 Vendor Specific: Not Supported 00:16:42.865 Reset Timeout: 15000 ms 00:16:42.865 Doorbell Stride: 4 bytes 00:16:42.865 NVM Subsystem Reset: Not Supported 00:16:42.865 Command Sets Supported 00:16:42.865 NVM Command Set: Supported 00:16:42.865 Boot Partition: Not Supported 00:16:42.865 Memory Page Size Minimum: 4096 bytes 00:16:42.865 Memory Page Size Maximum: 4096 bytes 00:16:42.865 Persistent Memory Region: Not Supported 00:16:42.865 Optional Asynchronous Events Supported 00:16:42.865 Namespace Attribute Notices: Supported 00:16:42.865 Firmware Activation Notices: Not Supported 00:16:42.865 ANA Change Notices: Not Supported 00:16:42.865 PLE Aggregate Log Change Notices: Not Supported 00:16:42.865 LBA Status Info Alert Notices: Not Supported 00:16:42.865 EGE Aggregate Log Change Notices: Not Supported 00:16:42.865 Normal NVM Subsystem Shutdown event: Not Supported 00:16:42.865 Zone Descriptor Change Notices: Not Supported 00:16:42.865 Discovery Log Change Notices: Not Supported 00:16:42.865 Controller Attributes 00:16:42.865 128-bit Host Identifier: Supported 00:16:42.865 Non-Operational Permissive Mode: Not Supported 00:16:42.865 NVM Sets: Not Supported 00:16:42.865 Read Recovery Levels: Not Supported 00:16:42.865 Endurance Groups: Not Supported 00:16:42.865 Predictable Latency Mode: Not Supported 00:16:42.865 Traffic Based Keep ALive: Not Supported 00:16:42.865 Namespace Granularity: Not Supported 00:16:42.865 SQ Associations: Not Supported 00:16:42.865 UUID List: Not Supported 00:16:42.865 Multi-Domain Subsystem: Not Supported 00:16:42.865 Fixed Capacity Management: Not Supported 00:16:42.865 Variable Capacity Management: Not Supported 00:16:42.865 Delete Endurance Group: Not Supported 00:16:42.865 Delete NVM Set: Not Supported 00:16:42.865 Extended LBA Formats Supported: Not Supported 00:16:42.865 Flexible Data Placement Supported: Not Supported 00:16:42.865 00:16:42.865 Controller Memory Buffer Support 00:16:42.865 ================================ 00:16:42.865 Supported: No 00:16:42.865 00:16:42.865 Persistent 
Memory Region Support 00:16:42.865 ================================ 00:16:42.865 Supported: No 00:16:42.865 00:16:42.865 Admin Command Set Attributes 00:16:42.865 ============================ 00:16:42.865 Security Send/Receive: Not Supported 00:16:42.865 Format NVM: Not Supported 00:16:42.865 Firmware Activate/Download: Not Supported 00:16:42.865 Namespace Management: Not Supported 00:16:42.865 Device Self-Test: Not Supported 00:16:42.865 Directives: Not Supported 00:16:42.865 NVMe-MI: Not Supported 00:16:42.865 Virtualization Management: Not Supported 00:16:42.865 Doorbell Buffer Config: Not Supported 00:16:42.865 Get LBA Status Capability: Not Supported 00:16:42.865 Command & Feature Lockdown Capability: Not Supported 00:16:42.865 Abort Command Limit: 4 00:16:42.865 Async Event Request Limit: 4 00:16:42.865 Number of Firmware Slots: N/A 00:16:42.865 Firmware Slot 1 Read-Only: N/A 00:16:42.865 Firmware Activation Without Reset: N/A 00:16:42.865 Multiple Update Detection Support: N/A 00:16:42.865 Firmware Update Granularity: No Information Provided 00:16:42.865 Per-Namespace SMART Log: No 00:16:42.865 Asymmetric Namespace Access Log Page: Not Supported 00:16:42.865 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:42.865 Command Effects Log Page: Supported 00:16:42.865 Get Log Page Extended Data: Supported 00:16:42.865 Telemetry Log Pages: Not Supported 00:16:42.865 Persistent Event Log Pages: Not Supported 00:16:42.865 Supported Log Pages Log Page: May Support 00:16:42.865 Commands Supported & Effects Log Page: Not Supported 00:16:42.865 Feature Identifiers & Effects Log Page:May Support 00:16:42.865 NVMe-MI Commands & Effects Log Page: May Support 00:16:42.865 Data Area 4 for Telemetry Log: Not Supported 00:16:42.865 Error Log Page Entries Supported: 128 00:16:42.865 Keep Alive: Supported 00:16:42.865 Keep Alive Granularity: 10000 ms 00:16:42.865 00:16:42.865 NVM Command Set Attributes 00:16:42.865 ========================== 00:16:42.865 Submission Queue Entry Size 00:16:42.865 Max: 64 00:16:42.865 Min: 64 00:16:42.865 Completion Queue Entry Size 00:16:42.865 Max: 16 00:16:42.865 Min: 16 00:16:42.865 Number of Namespaces: 32 00:16:42.865 Compare Command: Supported 00:16:42.865 Write Uncorrectable Command: Not Supported 00:16:42.865 Dataset Management Command: Supported 00:16:42.865 Write Zeroes Command: Supported 00:16:42.865 Set Features Save Field: Not Supported 00:16:42.865 Reservations: Not Supported 00:16:42.865 Timestamp: Not Supported 00:16:42.865 Copy: Supported 00:16:42.865 Volatile Write Cache: Present 00:16:42.865 Atomic Write Unit (Normal): 1 00:16:42.865 Atomic Write Unit (PFail): 1 00:16:42.865 Atomic Compare & Write Unit: 1 00:16:42.865 Fused Compare & Write: Supported 00:16:42.865 Scatter-Gather List 00:16:42.866 SGL Command Set: Supported (Dword aligned) 00:16:42.866 SGL Keyed: Not Supported 00:16:42.866 SGL Bit Bucket Descriptor: Not Supported 00:16:42.866 SGL Metadata Pointer: Not Supported 00:16:42.866 Oversized SGL: Not Supported 00:16:42.866 SGL Metadata Address: Not Supported 00:16:42.866 SGL Offset: Not Supported 00:16:42.866 Transport SGL Data Block: Not Supported 00:16:42.866 Replay Protected Memory Block: Not Supported 00:16:42.866 00:16:42.866 Firmware Slot Information 00:16:42.866 ========================= 00:16:42.866 Active slot: 1 00:16:42.866 Slot 1 Firmware Revision: 24.09 00:16:42.866 00:16:42.866 00:16:42.866 Commands Supported and Effects 00:16:42.866 ============================== 00:16:42.866 Admin Commands 00:16:42.866 -------------- 00:16:42.866 Get 
Log Page (02h): Supported 00:16:42.866 Identify (06h): Supported 00:16:42.866 Abort (08h): Supported 00:16:42.866 Set Features (09h): Supported 00:16:42.866 Get Features (0Ah): Supported 00:16:42.866 Asynchronous Event Request (0Ch): Supported 00:16:42.866 Keep Alive (18h): Supported 00:16:42.866 I/O Commands 00:16:42.866 ------------ 00:16:42.866 Flush (00h): Supported LBA-Change 00:16:42.866 Write (01h): Supported LBA-Change 00:16:42.866 Read (02h): Supported 00:16:42.866 Compare (05h): Supported 00:16:42.866 Write Zeroes (08h): Supported LBA-Change 00:16:42.866 Dataset Management (09h): Supported LBA-Change 00:16:42.866 Copy (19h): Supported LBA-Change 00:16:42.866 00:16:42.866 Error Log 00:16:42.866 ========= 00:16:42.866 00:16:42.866 Arbitration 00:16:42.866 =========== 00:16:42.866 Arbitration Burst: 1 00:16:42.866 00:16:42.866 Power Management 00:16:42.866 ================ 00:16:42.866 Number of Power States: 1 00:16:42.866 Current Power State: Power State #0 00:16:42.866 Power State #0: 00:16:42.866 Max Power: 0.00 W 00:16:42.866 Non-Operational State: Operational 00:16:42.866 Entry Latency: Not Reported 00:16:42.866 Exit Latency: Not Reported 00:16:42.866 Relative Read Throughput: 0 00:16:42.866 Relative Read Latency: 0 00:16:42.866 Relative Write Throughput: 0 00:16:42.866 Relative Write Latency: 0 00:16:42.866 Idle Power: Not Reported 00:16:42.866 Active Power: Not Reported 00:16:42.866 Non-Operational Permissive Mode: Not Supported 00:16:42.866 00:16:42.866 Health Information 00:16:42.866 ================== 00:16:42.866 Critical Warnings: 00:16:42.866 Available Spare Space: OK 00:16:42.866 Temperature: OK 00:16:42.866 Device Reliability: OK 00:16:42.866 Read Only: No 00:16:42.866 Volatile Memory Backup: OK 00:16:42.866 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:42.866 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:42.866 Available Spare: 0% 00:16:42.866 Available Sp[2024-07-25 16:56:02.900744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:42.866 [2024-07-25 16:56:02.900755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:42.866 [2024-07-25 16:56:02.900783] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:42.866 [2024-07-25 16:56:02.900793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.866 [2024-07-25 16:56:02.900799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.866 [2024-07-25 16:56:02.900805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.866 [2024-07-25 16:56:02.900812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.866 [2024-07-25 16:56:02.903210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:42.866 [2024-07-25 16:56:02.903221] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:42.866 [2024-07-25 16:56:02.903858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:42.866 [2024-07-25 16:56:02.903898] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:42.866 [2024-07-25 16:56:02.903904] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:42.866 [2024-07-25 16:56:02.904861] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:42.866 [2024-07-25 16:56:02.904873] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:42.866 [2024-07-25 16:56:02.904930] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:42.866 [2024-07-25 16:56:02.908209] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:42.866 are Threshold: 0% 00:16:42.866 Life Percentage Used: 0% 00:16:42.866 Data Units Read: 0 00:16:42.866 Data Units Written: 0 00:16:42.866 Host Read Commands: 0 00:16:42.866 Host Write Commands: 0 00:16:42.866 Controller Busy Time: 0 minutes 00:16:42.866 Power Cycles: 0 00:16:42.866 Power On Hours: 0 hours 00:16:42.866 Unsafe Shutdowns: 0 00:16:42.866 Unrecoverable Media Errors: 0 00:16:42.866 Lifetime Error Log Entries: 0 00:16:42.866 Warning Temperature Time: 0 minutes 00:16:42.866 Critical Temperature Time: 0 minutes 00:16:42.866 00:16:42.866 Number of Queues 00:16:42.866 ================ 00:16:42.866 Number of I/O Submission Queues: 127 00:16:42.866 Number of I/O Completion Queues: 127 00:16:42.866 00:16:42.866 Active Namespaces 00:16:42.866 ================= 00:16:42.866 Namespace ID:1 00:16:42.866 Error Recovery Timeout: Unlimited 00:16:42.866 Command Set Identifier: NVM (00h) 00:16:42.866 Deallocate: Supported 00:16:42.866 Deallocated/Unwritten Error: Not Supported 00:16:42.866 Deallocated Read Value: Unknown 00:16:42.866 Deallocate in Write Zeroes: Not Supported 00:16:42.866 Deallocated Guard Field: 0xFFFF 00:16:42.866 Flush: Supported 00:16:42.866 Reservation: Supported 00:16:42.866 Namespace Sharing Capabilities: Multiple Controllers 00:16:42.866 Size (in LBAs): 131072 (0GiB) 00:16:42.866 Capacity (in LBAs): 131072 (0GiB) 00:16:42.866 Utilization (in LBAs): 131072 (0GiB) 00:16:42.866 NGUID: 7CD6327F34E3445D879252C99BA37F9E 00:16:42.866 UUID: 7cd6327f-34e3-445d-8792-52c99ba37f9e 00:16:42.866 Thin Provisioning: Not Supported 00:16:42.866 Per-NS Atomic Units: Yes 00:16:42.866 Atomic Boundary Size (Normal): 0 00:16:42.866 Atomic Boundary Size (PFail): 0 00:16:42.866 Atomic Boundary Offset: 0 00:16:42.866 Maximum Single Source Range Length: 65535 00:16:42.866 Maximum Copy Length: 65535 00:16:42.866 Maximum Source Range Count: 1 00:16:42.866 NGUID/EUI64 Never Reused: No 00:16:42.866 Namespace Write Protected: No 00:16:42.866 Number of LBA Formats: 1 00:16:42.866 Current LBA Format: LBA Format #00 00:16:42.866 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:42.866 00:16:42.866 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:42.866 EAL: No free 2048 kB hugepages reported 
on node 1 00:16:42.866 [2024-07-25 16:56:03.091816] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:48.222 Initializing NVMe Controllers 00:16:48.222 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:48.222 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:48.222 Initialization complete. Launching workers. 00:16:48.222 ======================================================== 00:16:48.222 Latency(us) 00:16:48.222 Device Information : IOPS MiB/s Average min max 00:16:48.222 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39987.65 156.20 3200.87 847.00 6801.98 00:16:48.222 ======================================================== 00:16:48.222 Total : 39987.65 156.20 3200.87 847.00 6801.98 00:16:48.222 00:16:48.222 [2024-07-25 16:56:08.112961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:48.222 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:48.222 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.222 [2024-07-25 16:56:08.291839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:53.528 Initializing NVMe Controllers 00:16:53.528 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:53.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:53.528 Initialization complete. Launching workers. 
00:16:53.528 ======================================================== 00:16:53.528 Latency(us) 00:16:53.528 Device Information : IOPS MiB/s Average min max 00:16:53.528 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.19 62.73 7976.07 4987.26 9977.44 00:16:53.528 ======================================================== 00:16:53.528 Total : 16059.19 62.73 7976.07 4987.26 9977.44 00:16:53.528 00:16:53.528 [2024-07-25 16:56:13.331043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:53.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:53.528 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.528 [2024-07-25 16:56:13.523936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:58.822 [2024-07-25 16:56:18.597423] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:58.822 Initializing NVMe Controllers 00:16:58.822 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:58.822 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:58.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:58.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:58.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:58.822 Initialization complete. Launching workers. 00:16:58.822 Starting thread on core 2 00:16:58.822 Starting thread on core 3 00:16:58.822 Starting thread on core 1 00:16:58.822 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:58.822 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.822 [2024-07-25 16:56:18.856603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:02.126 [2024-07-25 16:56:21.910277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:02.126 Initializing NVMe Controllers 00:17:02.126 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:02.126 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:02.126 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:02.126 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:02.126 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:02.126 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:02.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:02.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:02.126 Initialization complete. Launching workers. 
00:17:02.126 Starting thread on core 1 with urgent priority queue 00:17:02.126 Starting thread on core 2 with urgent priority queue 00:17:02.126 Starting thread on core 3 with urgent priority queue 00:17:02.126 Starting thread on core 0 with urgent priority queue 00:17:02.126 SPDK bdev Controller (SPDK1 ) core 0: 3760.00 IO/s 26.60 secs/100000 ios 00:17:02.126 SPDK bdev Controller (SPDK1 ) core 1: 3835.67 IO/s 26.07 secs/100000 ios 00:17:02.126 SPDK bdev Controller (SPDK1 ) core 2: 3816.33 IO/s 26.20 secs/100000 ios 00:17:02.126 SPDK bdev Controller (SPDK1 ) core 3: 4521.67 IO/s 22.12 secs/100000 ios 00:17:02.126 ======================================================== 00:17:02.126 00:17:02.126 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:02.126 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.126 [2024-07-25 16:56:22.172630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:02.126 Initializing NVMe Controllers 00:17:02.126 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:02.126 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:02.126 Namespace ID: 1 size: 0GB 00:17:02.126 Initialization complete. 00:17:02.126 INFO: using host memory buffer for IO 00:17:02.126 Hello world! 00:17:02.126 [2024-07-25 16:56:22.206844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:02.126 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:02.126 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.387 [2024-07-25 16:56:22.470706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:03.329 Initializing NVMe Controllers 00:17:03.329 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.329 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.329 Initialization complete. Launching workers. 
00:17:03.329 submit (in ns) avg, min, max = 8446.2, 3897.5, 4042211.7 00:17:03.329 complete (in ns) avg, min, max = 17319.5, 2391.7, 4029182.5 00:17:03.329 00:17:03.329 Submit histogram 00:17:03.329 ================ 00:17:03.329 Range in us Cumulative Count 00:17:03.329 3.893 - 3.920: 0.8046% ( 154) 00:17:03.329 3.920 - 3.947: 4.7072% ( 747) 00:17:03.329 3.947 - 3.973: 15.1873% ( 2006) 00:17:03.329 3.973 - 4.000: 27.5691% ( 2370) 00:17:03.329 4.000 - 4.027: 39.0627% ( 2200) 00:17:03.329 4.027 - 4.053: 50.3683% ( 2164) 00:17:03.329 4.053 - 4.080: 66.1251% ( 3016) 00:17:03.329 4.080 - 4.107: 80.9205% ( 2832) 00:17:03.329 4.107 - 4.133: 91.6201% ( 2048) 00:17:03.329 4.133 - 4.160: 97.0378% ( 1037) 00:17:03.329 4.160 - 4.187: 98.8141% ( 340) 00:17:03.329 4.187 - 4.213: 99.3313% ( 99) 00:17:03.329 4.213 - 4.240: 99.4201% ( 17) 00:17:03.330 4.240 - 4.267: 99.4410% ( 4) 00:17:03.330 4.267 - 4.293: 99.4462% ( 1) 00:17:03.330 4.320 - 4.347: 99.4514% ( 1) 00:17:03.330 4.373 - 4.400: 99.4567% ( 1) 00:17:03.330 4.480 - 4.507: 99.4619% ( 1) 00:17:03.330 4.667 - 4.693: 99.4671% ( 1) 00:17:03.330 4.720 - 4.747: 99.4723% ( 1) 00:17:03.330 4.773 - 4.800: 99.4776% ( 1) 00:17:03.330 4.827 - 4.853: 99.4828% ( 1) 00:17:03.330 4.853 - 4.880: 99.4880% ( 1) 00:17:03.330 4.880 - 4.907: 99.4932% ( 1) 00:17:03.330 4.933 - 4.960: 99.4985% ( 1) 00:17:03.330 5.040 - 5.067: 99.5089% ( 2) 00:17:03.330 5.173 - 5.200: 99.5141% ( 1) 00:17:03.330 5.200 - 5.227: 99.5194% ( 1) 00:17:03.330 5.333 - 5.360: 99.5298% ( 2) 00:17:03.330 5.440 - 5.467: 99.5350% ( 1) 00:17:03.330 5.467 - 5.493: 99.5403% ( 1) 00:17:03.330 5.733 - 5.760: 99.5455% ( 1) 00:17:03.330 5.787 - 5.813: 99.5507% ( 1) 00:17:03.330 5.840 - 5.867: 99.5559% ( 1) 00:17:03.330 5.893 - 5.920: 99.5612% ( 1) 00:17:03.330 6.027 - 6.053: 99.5664% ( 1) 00:17:03.330 6.053 - 6.080: 99.5768% ( 2) 00:17:03.330 6.080 - 6.107: 99.5820% ( 1) 00:17:03.330 6.107 - 6.133: 99.5873% ( 1) 00:17:03.330 6.160 - 6.187: 99.6186% ( 6) 00:17:03.330 6.187 - 6.213: 99.6238% ( 1) 00:17:03.330 6.240 - 6.267: 99.6291% ( 1) 00:17:03.330 6.293 - 6.320: 99.6343% ( 1) 00:17:03.330 6.347 - 6.373: 99.6447% ( 2) 00:17:03.330 6.373 - 6.400: 99.6500% ( 1) 00:17:03.330 6.427 - 6.453: 99.6552% ( 1) 00:17:03.330 6.587 - 6.613: 99.6604% ( 1) 00:17:03.330 6.667 - 6.693: 99.6656% ( 1) 00:17:03.330 7.360 - 7.413: 99.6761% ( 2) 00:17:03.330 7.413 - 7.467: 99.6918% ( 3) 00:17:03.330 7.467 - 7.520: 99.7022% ( 2) 00:17:03.330 7.573 - 7.627: 99.7074% ( 1) 00:17:03.330 7.627 - 7.680: 99.7127% ( 1) 00:17:03.330 7.680 - 7.733: 99.7179% ( 1) 00:17:03.330 7.733 - 7.787: 99.7231% ( 1) 00:17:03.330 7.840 - 7.893: 99.7283% ( 1) 00:17:03.330 7.893 - 7.947: 99.7440% ( 3) 00:17:03.330 7.947 - 8.000: 99.7492% ( 1) 00:17:03.330 8.053 - 8.107: 99.7597% ( 2) 00:17:03.330 8.107 - 8.160: 99.7649% ( 1) 00:17:03.330 8.267 - 8.320: 99.7701% ( 1) 00:17:03.330 8.373 - 8.427: 99.7754% ( 1) 00:17:03.330 8.427 - 8.480: 99.7858% ( 2) 00:17:03.330 8.480 - 8.533: 99.7910% ( 1) 00:17:03.330 8.533 - 8.587: 99.7962% ( 1) 00:17:03.330 8.640 - 8.693: 99.8015% ( 1) 00:17:03.330 8.693 - 8.747: 99.8067% ( 1) 00:17:03.330 8.747 - 8.800: 99.8119% ( 1) 00:17:03.330 8.800 - 8.853: 99.8171% ( 1) 00:17:03.330 8.853 - 8.907: 99.8276% ( 2) 00:17:03.330 8.907 - 8.960: 99.8433% ( 3) 00:17:03.330 8.960 - 9.013: 99.8485% ( 1) 00:17:03.330 9.013 - 9.067: 99.8589% ( 2) 00:17:03.330 9.493 - 9.547: 99.8642% ( 1) 00:17:03.330 9.707 - 9.760: 99.8694% ( 1) 00:17:03.330 10.187 - 10.240: 99.8746% ( 1) 00:17:03.330 10.347 - 10.400: 99.8798% ( 1) 00:17:03.330 13.440 - 
13.493: 99.8851% ( 1) 00:17:03.330 15.147 - 15.253: 99.8903% ( 1) 00:17:03.330 [2024-07-25 16:56:23.491065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:03.330 3986.773 - 4014.080: 99.9948% ( 20) 00:17:03.330 4041.387 - 4068.693: 100.0000% ( 1) 00:17:03.330 00:17:03.330 Complete histogram 00:17:03.330 ================== 00:17:03.330 Range in us Cumulative Count 00:17:03.330 2.387 - 2.400: 0.2612% ( 50) 00:17:03.330 2.400 - 2.413: 1.0762% ( 156) 00:17:03.330 2.413 - 2.427: 1.2121% ( 26) 00:17:03.330 2.427 - 2.440: 1.3061% ( 18) 00:17:03.330 2.440 - 2.453: 12.1310% ( 2072) 00:17:03.330 2.453 - 2.467: 52.0036% ( 7632) 00:17:03.330 2.467 - 2.480: 63.4136% ( 2184) 00:17:03.330 2.480 - 2.493: 77.2948% ( 2657) 00:17:03.330 2.493 - 2.507: 80.9832% ( 706) 00:17:03.330 2.507 - 2.520: 82.6603% ( 321) 00:17:03.330 2.520 - 2.533: 88.1877% ( 1058) 00:17:03.330 2.533 - 2.547: 93.4800% ( 1013) 00:17:03.330 2.547 - 2.560: 96.3586% ( 551) 00:17:03.330 2.560 - 2.573: 98.4954% ( 409) 00:17:03.330 2.573 - 2.587: 99.0648% ( 109) 00:17:03.330 2.587 - 2.600: 99.3313% ( 51) 00:17:03.330 2.600 - 2.613: 99.3574% ( 5) 00:17:03.330 2.613 - 2.627: 99.3731% ( 3) 00:17:03.330 2.627 - 2.640: 99.3783% ( 1) 00:17:03.330 4.560 - 4.587: 99.3835% ( 1) 00:17:03.330 4.613 - 4.640: 99.3887% ( 1) 00:17:03.330 4.640 - 4.667: 99.3940% ( 1) 00:17:03.330 4.773 - 4.800: 99.3992% ( 1) 00:17:03.330 4.880 - 4.907: 99.4044% ( 1) 00:17:03.330 5.333 - 5.360: 99.4096% ( 1) 00:17:03.330 5.520 - 5.547: 99.4149% ( 1) 00:17:03.330 5.547 - 5.573: 99.4201% ( 1) 00:17:03.330 5.627 - 5.653: 99.4253% ( 1) 00:17:03.330 5.707 - 5.733: 99.4305% ( 1) 00:17:03.330 5.733 - 5.760: 99.4358% ( 1) 00:17:03.330 5.893 - 5.920: 99.4462% ( 2) 00:17:03.330 5.947 - 5.973: 99.4619% ( 3) 00:17:03.330 5.973 - 6.000: 99.4671% ( 1) 00:17:03.330 6.053 - 6.080: 99.4723% ( 1) 00:17:03.330 6.080 - 6.107: 99.4776% ( 1) 00:17:03.330 6.240 - 6.267: 99.4828% ( 1) 00:17:03.330 6.347 - 6.373: 99.4880% ( 1) 00:17:03.330 6.427 - 6.453: 99.4932% ( 1) 00:17:03.330 6.533 - 6.560: 99.5037% ( 2) 00:17:03.330 6.640 - 6.667: 99.5141% ( 2) 00:17:03.330 6.720 - 6.747: 99.5194% ( 1) 00:17:03.330 6.747 - 6.773: 99.5246% ( 1) 00:17:03.330 6.773 - 6.800: 99.5298% ( 1) 00:17:03.330 6.827 - 6.880: 99.5403% ( 2) 00:17:03.330 6.880 - 6.933: 99.5455% ( 1) 00:17:03.330 6.987 - 7.040: 99.5507% ( 1) 00:17:03.330 7.200 - 7.253: 99.5559% ( 1) 00:17:03.330 7.253 - 7.307: 99.5612% ( 1) 00:17:03.330 7.307 - 7.360: 99.5768% ( 3) 00:17:03.330 7.680 - 7.733: 99.5820% ( 1) 00:17:03.330 7.733 - 7.787: 99.5873% ( 1) 00:17:03.330 7.840 - 7.893: 99.5925% ( 1) 00:17:03.330 7.893 - 7.947: 99.5977% ( 1) 00:17:03.330 8.107 - 8.160: 99.6029% ( 1) 00:17:03.330 10.507 - 10.560: 99.6082% ( 1) 00:17:03.330 12.427 - 12.480: 99.6134% ( 1) 00:17:03.330 12.533 - 12.587: 99.6186% ( 1) 00:17:03.330 12.907 - 12.960: 99.6238% ( 1) 00:17:03.330 139.093 - 139.947: 99.6291% ( 1) 00:17:03.330 3986.773 - 4014.080: 99.9948% ( 70) 00:17:03.330 4014.080 - 4041.387: 100.0000% ( 1) 00:17:03.330 00:17:03.330 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:03.330 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:03.330 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:03.330 16:56:23 
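The aer_vfio_user steps that follow exercise Asynchronous Event Requests: the aer test binary is attached to the vfio-user controller, and a second namespace is then hot-added so the target raises a Namespace Attribute Changed notice (visible below as "aer_cb - Changed Namespace"). A minimal sketch of the trigger side, using the same rpc.py calls that appear later in this trace (rpc.py is assumed to reach the target over its default /var/tmp/spdk.sock socket, since the test does not pass -s):

    # create a 64 MiB, 512-byte-block malloc bdev and expose it as namespace 2 of cnode1;
    # adding the namespace is what triggers the AER on the connected host
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
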
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:03.330 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:03.592 [ 00:17:03.592 { 00:17:03.592 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:03.592 "subtype": "Discovery", 00:17:03.592 "listen_addresses": [], 00:17:03.592 "allow_any_host": true, 00:17:03.592 "hosts": [] 00:17:03.592 }, 00:17:03.592 { 00:17:03.592 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:03.592 "subtype": "NVMe", 00:17:03.592 "listen_addresses": [ 00:17:03.592 { 00:17:03.592 "trtype": "VFIOUSER", 00:17:03.592 "adrfam": "IPv4", 00:17:03.592 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:03.592 "trsvcid": "0" 00:17:03.592 } 00:17:03.592 ], 00:17:03.592 "allow_any_host": true, 00:17:03.592 "hosts": [], 00:17:03.592 "serial_number": "SPDK1", 00:17:03.592 "model_number": "SPDK bdev Controller", 00:17:03.592 "max_namespaces": 32, 00:17:03.592 "min_cntlid": 1, 00:17:03.592 "max_cntlid": 65519, 00:17:03.592 "namespaces": [ 00:17:03.592 { 00:17:03.592 "nsid": 1, 00:17:03.592 "bdev_name": "Malloc1", 00:17:03.592 "name": "Malloc1", 00:17:03.592 "nguid": "7CD6327F34E3445D879252C99BA37F9E", 00:17:03.592 "uuid": "7cd6327f-34e3-445d-8792-52c99ba37f9e" 00:17:03.592 } 00:17:03.592 ] 00:17:03.592 }, 00:17:03.592 { 00:17:03.592 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:03.592 "subtype": "NVMe", 00:17:03.593 "listen_addresses": [ 00:17:03.593 { 00:17:03.593 "trtype": "VFIOUSER", 00:17:03.593 "adrfam": "IPv4", 00:17:03.593 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:03.593 "trsvcid": "0" 00:17:03.593 } 00:17:03.593 ], 00:17:03.593 "allow_any_host": true, 00:17:03.593 "hosts": [], 00:17:03.593 "serial_number": "SPDK2", 00:17:03.593 "model_number": "SPDK bdev Controller", 00:17:03.593 "max_namespaces": 32, 00:17:03.593 "min_cntlid": 1, 00:17:03.593 "max_cntlid": 65519, 00:17:03.593 "namespaces": [ 00:17:03.593 { 00:17:03.593 "nsid": 1, 00:17:03.593 "bdev_name": "Malloc2", 00:17:03.593 "name": "Malloc2", 00:17:03.593 "nguid": "347A07E015AD42C4BFD919AB34C1E4AC", 00:17:03.593 "uuid": "347a07e0-15ad-42c4-bfd9-19ab34c1e4ac" 00:17:03.593 } 00:17:03.593 ] 00:17:03.593 } 00:17:03.593 ] 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1405306 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:03.593 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:03.593 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.854 Malloc3 00:17:03.854 [2024-07-25 16:56:23.868243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:03.854 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:03.854 [2024-07-25 16:56:24.045455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:03.854 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:03.854 Asynchronous Event Request test 00:17:03.854 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.854 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:03.854 Registering asynchronous event callbacks... 00:17:03.854 Starting namespace attribute notice tests for all controllers... 00:17:03.854 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:03.854 aer_cb - Changed Namespace 00:17:03.854 Cleaning up... 00:17:04.116 [ 00:17:04.116 { 00:17:04.116 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:04.116 "subtype": "Discovery", 00:17:04.116 "listen_addresses": [], 00:17:04.116 "allow_any_host": true, 00:17:04.116 "hosts": [] 00:17:04.116 }, 00:17:04.116 { 00:17:04.116 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:04.116 "subtype": "NVMe", 00:17:04.116 "listen_addresses": [ 00:17:04.116 { 00:17:04.116 "trtype": "VFIOUSER", 00:17:04.116 "adrfam": "IPv4", 00:17:04.116 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:04.116 "trsvcid": "0" 00:17:04.116 } 00:17:04.116 ], 00:17:04.116 "allow_any_host": true, 00:17:04.116 "hosts": [], 00:17:04.116 "serial_number": "SPDK1", 00:17:04.116 "model_number": "SPDK bdev Controller", 00:17:04.116 "max_namespaces": 32, 00:17:04.116 "min_cntlid": 1, 00:17:04.116 "max_cntlid": 65519, 00:17:04.116 "namespaces": [ 00:17:04.116 { 00:17:04.116 "nsid": 1, 00:17:04.116 "bdev_name": "Malloc1", 00:17:04.116 "name": "Malloc1", 00:17:04.116 "nguid": "7CD6327F34E3445D879252C99BA37F9E", 00:17:04.116 "uuid": "7cd6327f-34e3-445d-8792-52c99ba37f9e" 00:17:04.116 }, 00:17:04.116 { 00:17:04.116 "nsid": 2, 00:17:04.116 "bdev_name": "Malloc3", 00:17:04.116 "name": "Malloc3", 00:17:04.116 "nguid": "0C186F71583B4A508B4376984BFAF401", 00:17:04.116 "uuid": "0c186f71-583b-4a50-8b43-76984bfaf401" 00:17:04.116 } 00:17:04.116 ] 00:17:04.116 }, 00:17:04.116 { 00:17:04.116 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:04.116 "subtype": "NVMe", 00:17:04.116 "listen_addresses": [ 00:17:04.116 { 00:17:04.116 "trtype": "VFIOUSER", 00:17:04.116 "adrfam": "IPv4", 00:17:04.116 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:04.116 "trsvcid": "0" 00:17:04.116 } 00:17:04.116 ], 00:17:04.116 "allow_any_host": true, 00:17:04.116 "hosts": [], 00:17:04.116 
"serial_number": "SPDK2", 00:17:04.116 "model_number": "SPDK bdev Controller", 00:17:04.116 "max_namespaces": 32, 00:17:04.116 "min_cntlid": 1, 00:17:04.116 "max_cntlid": 65519, 00:17:04.116 "namespaces": [ 00:17:04.116 { 00:17:04.116 "nsid": 1, 00:17:04.116 "bdev_name": "Malloc2", 00:17:04.116 "name": "Malloc2", 00:17:04.116 "nguid": "347A07E015AD42C4BFD919AB34C1E4AC", 00:17:04.116 "uuid": "347a07e0-15ad-42c4-bfd9-19ab34c1e4ac" 00:17:04.116 } 00:17:04.116 ] 00:17:04.116 } 00:17:04.116 ] 00:17:04.116 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1405306 00:17:04.116 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:04.116 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:04.116 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:04.116 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:04.116 [2024-07-25 16:56:24.270128] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:17:04.116 [2024-07-25 16:56:24.270192] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405443 ] 00:17:04.116 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.116 [2024-07-25 16:56:24.303771] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:04.116 [2024-07-25 16:56:24.308994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:04.116 [2024-07-25 16:56:24.309016] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe9fdff5000 00:17:04.116 [2024-07-25 16:56:24.309991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.310995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.312005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.313012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.314021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.315025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.316031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.317039] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.116 [2024-07-25 16:56:24.318048] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:04.116 [2024-07-25 16:56:24.318058] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe9fdfea000 00:17:04.116 [2024-07-25 16:56:24.319390] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:04.116 [2024-07-25 16:56:24.335614] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:04.116 [2024-07-25 16:56:24.335634] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:04.116 [2024-07-25 16:56:24.340714] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:04.116 [2024-07-25 16:56:24.340758] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:04.116 [2024-07-25 16:56:24.340840] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:04.116 [2024-07-25 16:56:24.340855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:04.116 [2024-07-25 16:56:24.340860] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:04.116 [2024-07-25 16:56:24.341716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:04.116 [2024-07-25 16:56:24.341728] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:04.116 [2024-07-25 16:56:24.341735] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:04.116 [2024-07-25 16:56:24.342726] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:04.117 [2024-07-25 16:56:24.342736] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:04.117 [2024-07-25 16:56:24.342743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.117 [2024-07-25 16:56:24.343727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:04.117 [2024-07-25 16:56:24.343737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.117 [2024-07-25 16:56:24.344733] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:04.117 [2024-07-25 16:56:24.344741] 
nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:04.117 [2024-07-25 16:56:24.344746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:04.117 [2024-07-25 16:56:24.344753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.117 [2024-07-25 16:56:24.344859] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:04.117 [2024-07-25 16:56:24.344864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.117 [2024-07-25 16:56:24.344869] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:04.117 [2024-07-25 16:56:24.345744] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:04.117 [2024-07-25 16:56:24.346745] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:04.117 [2024-07-25 16:56:24.347753] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:04.117 [2024-07-25 16:56:24.348757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:04.117 [2024-07-25 16:56:24.348797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.117 [2024-07-25 16:56:24.349774] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:04.117 [2024-07-25 16:56:24.349782] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.117 [2024-07-25 16:56:24.349787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.349809] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:04.117 [2024-07-25 16:56:24.349820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.349833] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.117 [2024-07-25 16:56:24.349838] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.117 [2024-07-25 16:56:24.349842] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.117 [2024-07-25 16:56:24.349854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.117 [2024-07-25 16:56:24.357211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:04.117 [2024-07-25 16:56:24.357222] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:04.117 [2024-07-25 16:56:24.357227] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:04.117 [2024-07-25 16:56:24.357232] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:04.117 [2024-07-25 16:56:24.357236] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:04.117 [2024-07-25 16:56:24.357241] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:04.117 [2024-07-25 16:56:24.357245] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:04.117 [2024-07-25 16:56:24.357250] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.357258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.357271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:04.117 [2024-07-25 16:56:24.365207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:04.117 [2024-07-25 16:56:24.365224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.117 [2024-07-25 16:56:24.365233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.117 [2024-07-25 16:56:24.365241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.117 [2024-07-25 16:56:24.365251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.117 [2024-07-25 16:56:24.365256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.365264] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.365274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:04.117 [2024-07-25 16:56:24.373208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:04.117 [2024-07-25 16:56:24.373216] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:04.117 [2024-07-25 16:56:24.373221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.373231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.373238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.373246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:04.117 [2024-07-25 16:56:24.381211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:04.117 [2024-07-25 16:56:24.381277] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.381285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.117 [2024-07-25 16:56:24.381293] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:04.117 [2024-07-25 16:56:24.381297] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:04.117 [2024-07-25 16:56:24.381301] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.117 [2024-07-25 16:56:24.381307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:04.379 [2024-07-25 16:56:24.389209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:04.379 [2024-07-25 16:56:24.389221] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:04.379 [2024-07-25 16:56:24.389229] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:04.379 [2024-07-25 16:56:24.389237] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.379 [2024-07-25 16:56:24.389244] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.379 [2024-07-25 16:56:24.389248] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.379 [2024-07-25 16:56:24.389252] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.379 [2024-07-25 16:56:24.389258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.379 [2024-07-25 16:56:24.397210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:04.379 [2024-07-25 16:56:24.397226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.379 [2024-07-25 16:56:24.397234] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.379 [2024-07-25 16:56:24.397241] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.379 [2024-07-25 16:56:24.397245] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.379 [2024-07-25 16:56:24.397249] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.379 [2024-07-25 16:56:24.397255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.379 [2024-07-25 16:56:24.405210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.405220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.380 [2024-07-25 16:56:24.405227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:04.380 [2024-07-25 16:56:24.405235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:04.380 [2024-07-25 16:56:24.405242] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:04.380 [2024-07-25 16:56:24.405247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:04.380 [2024-07-25 16:56:24.405252] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:04.380 [2024-07-25 16:56:24.405257] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.380 [2024-07-25 16:56:24.405261] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:04.380 [2024-07-25 16:56:24.405266] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:04.380 [2024-07-25 16:56:24.405284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:04.380 [2024-07-25 16:56:24.413210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.413224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:04.380 [2024-07-25 16:56:24.421209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.421222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:04.380 [2024-07-25 16:56:24.429208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.429222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:04.380 [2024-07-25 16:56:24.437209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.437225] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:04.380 [2024-07-25 16:56:24.437232] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:04.380 [2024-07-25 16:56:24.437236] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:04.380 [2024-07-25 16:56:24.437240] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:04.380 [2024-07-25 16:56:24.437243] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:04.380 [2024-07-25 16:56:24.437249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:04.380 [2024-07-25 16:56:24.437257] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:04.380 [2024-07-25 16:56:24.437261] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:04.380 [2024-07-25 16:56:24.437265] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.380 [2024-07-25 16:56:24.437270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:04.380 [2024-07-25 16:56:24.437278] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:04.380 [2024-07-25 16:56:24.437282] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.380 [2024-07-25 16:56:24.437285] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.380 [2024-07-25 16:56:24.437291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.380 [2024-07-25 16:56:24.437299] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:04.380 [2024-07-25 16:56:24.437303] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:04.380 [2024-07-25 16:56:24.437307] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.380 [2024-07-25 16:56:24.437312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:04.380 [2024-07-25 16:56:24.445210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.445224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.445235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:04.380 [2024-07-25 16:56:24.445242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:04.380 ===================================================== 00:17:04.380 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:04.380 ===================================================== 00:17:04.380 Controller Capabilities/Features 00:17:04.380 ================================ 00:17:04.380 Vendor ID: 4e58 00:17:04.380 Subsystem Vendor ID: 4e58 00:17:04.380 Serial Number: SPDK2 00:17:04.380 Model Number: SPDK bdev Controller 00:17:04.380 Firmware Version: 24.09 00:17:04.380 Recommended Arb Burst: 6 00:17:04.380 IEEE OUI Identifier: 8d 6b 50 00:17:04.380 Multi-path I/O 00:17:04.380 May have multiple subsystem ports: Yes 00:17:04.380 May have multiple controllers: Yes 00:17:04.380 Associated with SR-IOV VF: No 00:17:04.380 Max Data Transfer Size: 131072 00:17:04.380 Max Number of Namespaces: 32 00:17:04.380 Max Number of I/O Queues: 127 00:17:04.380 NVMe Specification Version (VS): 1.3 00:17:04.380 NVMe Specification Version (Identify): 1.3 00:17:04.380 Maximum Queue Entries: 256 00:17:04.380 Contiguous Queues Required: Yes 00:17:04.380 Arbitration Mechanisms Supported 00:17:04.380 Weighted Round Robin: Not Supported 00:17:04.380 Vendor Specific: Not Supported 00:17:04.380 Reset Timeout: 15000 ms 00:17:04.380 Doorbell Stride: 4 bytes 00:17:04.380 NVM Subsystem Reset: Not Supported 00:17:04.380 Command Sets Supported 00:17:04.380 NVM Command Set: Supported 00:17:04.380 Boot Partition: Not Supported 00:17:04.380 Memory Page Size Minimum: 4096 bytes 00:17:04.380 Memory Page Size Maximum: 4096 bytes 00:17:04.380 Persistent Memory Region: Not Supported 00:17:04.380 Optional Asynchronous Events Supported 00:17:04.380 Namespace Attribute Notices: Supported 00:17:04.380 Firmware Activation Notices: Not Supported 00:17:04.380 ANA Change Notices: Not Supported 00:17:04.380 PLE Aggregate Log Change Notices: Not Supported 00:17:04.380 LBA Status Info Alert Notices: Not Supported 00:17:04.380 EGE Aggregate Log Change Notices: Not Supported 00:17:04.380 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.380 Zone Descriptor Change Notices: Not Supported 00:17:04.380 Discovery Log Change Notices: Not Supported 00:17:04.380 Controller Attributes 00:17:04.380 128-bit Host Identifier: Supported 00:17:04.380 Non-Operational Permissive Mode: Not Supported 00:17:04.380 NVM Sets: Not Supported 00:17:04.380 Read Recovery Levels: Not Supported 00:17:04.380 Endurance Groups: Not Supported 00:17:04.380 Predictable Latency Mode: Not Supported 00:17:04.380 Traffic Based Keep ALive: Not Supported 00:17:04.380 Namespace Granularity: Not Supported 00:17:04.380 SQ Associations: Not Supported 00:17:04.380 UUID List: Not Supported 00:17:04.380 Multi-Domain Subsystem: Not Supported 00:17:04.380 Fixed Capacity Management: Not Supported 00:17:04.380 Variable Capacity Management: Not Supported 00:17:04.380 Delete Endurance Group: Not Supported 00:17:04.380 Delete NVM Set: Not Supported 00:17:04.380 Extended LBA Formats Supported: Not Supported 00:17:04.380 Flexible Data Placement Supported: Not Supported 00:17:04.380 00:17:04.380 Controller Memory Buffer Support 00:17:04.380 ================================ 00:17:04.380 Supported: No 00:17:04.380 00:17:04.380 Persistent Memory Region Support 00:17:04.380 
================================ 00:17:04.380 Supported: No 00:17:04.380 00:17:04.380 Admin Command Set Attributes 00:17:04.380 ============================ 00:17:04.380 Security Send/Receive: Not Supported 00:17:04.380 Format NVM: Not Supported 00:17:04.380 Firmware Activate/Download: Not Supported 00:17:04.380 Namespace Management: Not Supported 00:17:04.380 Device Self-Test: Not Supported 00:17:04.380 Directives: Not Supported 00:17:04.380 NVMe-MI: Not Supported 00:17:04.380 Virtualization Management: Not Supported 00:17:04.380 Doorbell Buffer Config: Not Supported 00:17:04.380 Get LBA Status Capability: Not Supported 00:17:04.380 Command & Feature Lockdown Capability: Not Supported 00:17:04.380 Abort Command Limit: 4 00:17:04.380 Async Event Request Limit: 4 00:17:04.380 Number of Firmware Slots: N/A 00:17:04.380 Firmware Slot 1 Read-Only: N/A 00:17:04.380 Firmware Activation Without Reset: N/A 00:17:04.380 Multiple Update Detection Support: N/A 00:17:04.380 Firmware Update Granularity: No Information Provided 00:17:04.380 Per-Namespace SMART Log: No 00:17:04.380 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.380 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:04.380 Command Effects Log Page: Supported 00:17:04.380 Get Log Page Extended Data: Supported 00:17:04.380 Telemetry Log Pages: Not Supported 00:17:04.381 Persistent Event Log Pages: Not Supported 00:17:04.381 Supported Log Pages Log Page: May Support 00:17:04.381 Commands Supported & Effects Log Page: Not Supported 00:17:04.381 Feature Identifiers & Effects Log Page:May Support 00:17:04.381 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.381 Data Area 4 for Telemetry Log: Not Supported 00:17:04.381 Error Log Page Entries Supported: 128 00:17:04.381 Keep Alive: Supported 00:17:04.381 Keep Alive Granularity: 10000 ms 00:17:04.381 00:17:04.381 NVM Command Set Attributes 00:17:04.381 ========================== 00:17:04.381 Submission Queue Entry Size 00:17:04.381 Max: 64 00:17:04.381 Min: 64 00:17:04.381 Completion Queue Entry Size 00:17:04.381 Max: 16 00:17:04.381 Min: 16 00:17:04.381 Number of Namespaces: 32 00:17:04.381 Compare Command: Supported 00:17:04.381 Write Uncorrectable Command: Not Supported 00:17:04.381 Dataset Management Command: Supported 00:17:04.381 Write Zeroes Command: Supported 00:17:04.381 Set Features Save Field: Not Supported 00:17:04.381 Reservations: Not Supported 00:17:04.381 Timestamp: Not Supported 00:17:04.381 Copy: Supported 00:17:04.381 Volatile Write Cache: Present 00:17:04.381 Atomic Write Unit (Normal): 1 00:17:04.381 Atomic Write Unit (PFail): 1 00:17:04.381 Atomic Compare & Write Unit: 1 00:17:04.381 Fused Compare & Write: Supported 00:17:04.381 Scatter-Gather List 00:17:04.381 SGL Command Set: Supported (Dword aligned) 00:17:04.381 SGL Keyed: Not Supported 00:17:04.381 SGL Bit Bucket Descriptor: Not Supported 00:17:04.381 SGL Metadata Pointer: Not Supported 00:17:04.381 Oversized SGL: Not Supported 00:17:04.381 SGL Metadata Address: Not Supported 00:17:04.381 SGL Offset: Not Supported 00:17:04.381 Transport SGL Data Block: Not Supported 00:17:04.381 Replay Protected Memory Block: Not Supported 00:17:04.381 00:17:04.381 Firmware Slot Information 00:17:04.381 ========================= 00:17:04.381 Active slot: 1 00:17:04.381 Slot 1 Firmware Revision: 24.09 00:17:04.381 00:17:04.381 00:17:04.381 Commands Supported and Effects 00:17:04.381 ============================== 00:17:04.381 Admin Commands 00:17:04.381 -------------- 00:17:04.381 Get Log Page (02h): Supported 
00:17:04.381 Identify (06h): Supported 00:17:04.381 Abort (08h): Supported 00:17:04.381 Set Features (09h): Supported 00:17:04.381 Get Features (0Ah): Supported 00:17:04.381 Asynchronous Event Request (0Ch): Supported 00:17:04.381 Keep Alive (18h): Supported 00:17:04.381 I/O Commands 00:17:04.381 ------------ 00:17:04.381 Flush (00h): Supported LBA-Change 00:17:04.381 Write (01h): Supported LBA-Change 00:17:04.381 Read (02h): Supported 00:17:04.381 Compare (05h): Supported 00:17:04.381 Write Zeroes (08h): Supported LBA-Change 00:17:04.381 Dataset Management (09h): Supported LBA-Change 00:17:04.381 Copy (19h): Supported LBA-Change 00:17:04.381 00:17:04.381 Error Log 00:17:04.381 ========= 00:17:04.381 00:17:04.381 Arbitration 00:17:04.381 =========== 00:17:04.381 Arbitration Burst: 1 00:17:04.381 00:17:04.381 Power Management 00:17:04.381 ================ 00:17:04.381 Number of Power States: 1 00:17:04.381 Current Power State: Power State #0 00:17:04.381 Power State #0: 00:17:04.381 Max Power: 0.00 W 00:17:04.381 Non-Operational State: Operational 00:17:04.381 Entry Latency: Not Reported 00:17:04.381 Exit Latency: Not Reported 00:17:04.381 Relative Read Throughput: 0 00:17:04.381 Relative Read Latency: 0 00:17:04.381 Relative Write Throughput: 0 00:17:04.381 Relative Write Latency: 0 00:17:04.381 Idle Power: Not Reported 00:17:04.381 Active Power: Not Reported 00:17:04.381 Non-Operational Permissive Mode: Not Supported 00:17:04.381 00:17:04.381 Health Information 00:17:04.381 ================== 00:17:04.381 Critical Warnings: 00:17:04.381 Available Spare Space: OK 00:17:04.381 Temperature: OK 00:17:04.381 Device Reliability: OK 00:17:04.381 Read Only: No 00:17:04.381 Volatile Memory Backup: OK 00:17:04.381 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:04.381 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:04.381 Available Spare: 0% 00:17:04.381 Available Sp[2024-07-25 16:56:24.445341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:04.381 [2024-07-25 16:56:24.453211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:04.381 [2024-07-25 16:56:24.453242] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:04.381 [2024-07-25 16:56:24.453251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.381 [2024-07-25 16:56:24.453257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.381 [2024-07-25 16:56:24.453264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.381 [2024-07-25 16:56:24.453270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.381 [2024-07-25 16:56:24.453320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:04.381 [2024-07-25 16:56:24.453330] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:04.381 [2024-07-25 16:56:24.454326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:17:04.381 [2024-07-25 16:56:24.454375] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:04.381 [2024-07-25 16:56:24.454382] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:04.381 [2024-07-25 16:56:24.455327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:04.381 [2024-07-25 16:56:24.455338] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:04.381 [2024-07-25 16:56:24.455386] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:04.381 [2024-07-25 16:56:24.456761] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:04.381 are Threshold: 0% 00:17:04.381 Life Percentage Used: 0% 00:17:04.381 Data Units Read: 0 00:17:04.381 Data Units Written: 0 00:17:04.381 Host Read Commands: 0 00:17:04.381 Host Write Commands: 0 00:17:04.381 Controller Busy Time: 0 minutes 00:17:04.381 Power Cycles: 0 00:17:04.381 Power On Hours: 0 hours 00:17:04.381 Unsafe Shutdowns: 0 00:17:04.381 Unrecoverable Media Errors: 0 00:17:04.381 Lifetime Error Log Entries: 0 00:17:04.381 Warning Temperature Time: 0 minutes 00:17:04.381 Critical Temperature Time: 0 minutes 00:17:04.381 00:17:04.381 Number of Queues 00:17:04.381 ================ 00:17:04.381 Number of I/O Submission Queues: 127 00:17:04.381 Number of I/O Completion Queues: 127 00:17:04.381 00:17:04.381 Active Namespaces 00:17:04.381 ================= 00:17:04.381 Namespace ID:1 00:17:04.381 Error Recovery Timeout: Unlimited 00:17:04.381 Command Set Identifier: NVM (00h) 00:17:04.381 Deallocate: Supported 00:17:04.381 Deallocated/Unwritten Error: Not Supported 00:17:04.381 Deallocated Read Value: Unknown 00:17:04.381 Deallocate in Write Zeroes: Not Supported 00:17:04.381 Deallocated Guard Field: 0xFFFF 00:17:04.381 Flush: Supported 00:17:04.381 Reservation: Supported 00:17:04.381 Namespace Sharing Capabilities: Multiple Controllers 00:17:04.381 Size (in LBAs): 131072 (0GiB) 00:17:04.381 Capacity (in LBAs): 131072 (0GiB) 00:17:04.381 Utilization (in LBAs): 131072 (0GiB) 00:17:04.381 NGUID: 347A07E015AD42C4BFD919AB34C1E4AC 00:17:04.381 UUID: 347a07e0-15ad-42c4-bfd9-19ab34c1e4ac 00:17:04.381 Thin Provisioning: Not Supported 00:17:04.381 Per-NS Atomic Units: Yes 00:17:04.381 Atomic Boundary Size (Normal): 0 00:17:04.381 Atomic Boundary Size (PFail): 0 00:17:04.381 Atomic Boundary Offset: 0 00:17:04.381 Maximum Single Source Range Length: 65535 00:17:04.381 Maximum Copy Length: 65535 00:17:04.381 Maximum Source Range Count: 1 00:17:04.381 NGUID/EUI64 Never Reused: No 00:17:04.381 Namespace Write Protected: No 00:17:04.381 Number of LBA Formats: 1 00:17:04.381 Current LBA Format: LBA Format #00 00:17:04.381 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:04.381 00:17:04.381 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:04.381 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.381 [2024-07-25 
16:56:24.642266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:09.671 Initializing NVMe Controllers 00:17:09.671 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:09.671 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:09.671 Initialization complete. Launching workers. 00:17:09.671 ======================================================== 00:17:09.671 Latency(us) 00:17:09.671 Device Information : IOPS MiB/s Average min max 00:17:09.671 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39981.40 156.18 3203.87 840.54 6811.31 00:17:09.671 ======================================================== 00:17:09.671 Total : 39981.40 156.18 3203.87 840.54 6811.31 00:17:09.671 00:17:09.671 [2024-07-25 16:56:29.750386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:09.671 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:09.671 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.671 [2024-07-25 16:56:29.931935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:15.045 Initializing NVMe Controllers 00:17:15.045 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:15.045 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:15.045 Initialization complete. Launching workers. 
00:17:15.045 ======================================================== 00:17:15.045 Latency(us) 00:17:15.045 Device Information : IOPS MiB/s Average min max 00:17:15.045 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35360.10 138.13 3619.48 1103.54 7443.93 00:17:15.045 ======================================================== 00:17:15.045 Total : 35360.10 138.13 3619.48 1103.54 7443.93 00:17:15.045 00:17:15.045 [2024-07-25 16:56:34.951887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:15.045 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:15.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.045 [2024-07-25 16:56:35.134582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:20.337 [2024-07-25 16:56:40.273343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:20.337 Initializing NVMe Controllers 00:17:20.337 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:20.337 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:20.337 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:20.337 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:20.337 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:20.337 Initialization complete. Launching workers. 00:17:20.337 Starting thread on core 2 00:17:20.337 Starting thread on core 3 00:17:20.337 Starting thread on core 1 00:17:20.337 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:20.337 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.337 [2024-07-25 16:56:40.525597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:23.639 [2024-07-25 16:56:43.687323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:23.639 Initializing NVMe Controllers 00:17:23.639 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:23.639 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:23.639 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:23.639 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:23.639 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:23.639 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:23.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:23.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:23.639 Initialization complete. Launching workers. 
00:17:23.639 Starting thread on core 1 with urgent priority queue 00:17:23.639 Starting thread on core 2 with urgent priority queue 00:17:23.639 Starting thread on core 3 with urgent priority queue 00:17:23.639 Starting thread on core 0 with urgent priority queue 00:17:23.639 SPDK bdev Controller (SPDK2 ) core 0: 4930.00 IO/s 20.28 secs/100000 ios 00:17:23.639 SPDK bdev Controller (SPDK2 ) core 1: 4796.67 IO/s 20.85 secs/100000 ios 00:17:23.639 SPDK bdev Controller (SPDK2 ) core 2: 4101.00 IO/s 24.38 secs/100000 ios 00:17:23.639 SPDK bdev Controller (SPDK2 ) core 3: 3923.00 IO/s 25.49 secs/100000 ios 00:17:23.639 ======================================================== 00:17:23.639 00:17:23.640 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:23.640 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.900 [2024-07-25 16:56:43.946628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:23.900 Initializing NVMe Controllers 00:17:23.900 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:23.900 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:23.900 Namespace ID: 1 size: 0GB 00:17:23.900 Initialization complete. 00:17:23.900 INFO: using host memory buffer for IO 00:17:23.900 Hello world! 00:17:23.900 [2024-07-25 16:56:43.956686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:23.900 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:23.900 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.161 [2024-07-25 16:56:44.225460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.105 Initializing NVMe Controllers 00:17:25.105 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.105 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.105 Initialization complete. Launching workers. 
00:17:25.105 submit (in ns) avg, min, max = 10267.6, 3910.8, 4006391.7 00:17:25.105 complete (in ns) avg, min, max = 15298.3, 2377.5, 4007630.8 00:17:25.105 00:17:25.105 Submit histogram 00:17:25.105 ================ 00:17:25.105 Range in us Cumulative Count 00:17:25.105 3.893 - 3.920: 0.2635% ( 51) 00:17:25.105 3.920 - 3.947: 3.4875% ( 624) 00:17:25.105 3.947 - 3.973: 12.8856% ( 1819) 00:17:25.105 3.973 - 4.000: 23.8801% ( 2128) 00:17:25.105 4.000 - 4.027: 33.8207% ( 1924) 00:17:25.105 4.027 - 4.053: 44.4743% ( 2062) 00:17:25.105 4.053 - 4.080: 58.9150% ( 2795) 00:17:25.105 4.080 - 4.107: 74.5957% ( 3035) 00:17:25.105 4.107 - 4.133: 87.7396% ( 2544) 00:17:25.105 4.133 - 4.160: 95.2622% ( 1456) 00:17:25.105 4.160 - 4.187: 98.1193% ( 553) 00:17:25.105 4.187 - 4.213: 99.1113% ( 192) 00:17:25.105 4.213 - 4.240: 99.3387% ( 44) 00:17:25.105 4.240 - 4.267: 99.3748% ( 7) 00:17:25.105 4.267 - 4.293: 99.3903% ( 3) 00:17:25.105 4.347 - 4.373: 99.4007% ( 2) 00:17:25.105 4.373 - 4.400: 99.4058% ( 1) 00:17:25.105 4.587 - 4.613: 99.4110% ( 1) 00:17:25.105 4.827 - 4.853: 99.4162% ( 1) 00:17:25.105 4.880 - 4.907: 99.4213% ( 1) 00:17:25.105 5.013 - 5.040: 99.4265% ( 1) 00:17:25.105 5.227 - 5.253: 99.4317% ( 1) 00:17:25.105 5.387 - 5.413: 99.4368% ( 1) 00:17:25.105 5.520 - 5.547: 99.4472% ( 2) 00:17:25.105 5.707 - 5.733: 99.4523% ( 1) 00:17:25.105 5.787 - 5.813: 99.4627% ( 2) 00:17:25.105 5.947 - 5.973: 99.4730% ( 2) 00:17:25.105 6.000 - 6.027: 99.4782% ( 1) 00:17:25.105 6.053 - 6.080: 99.4885% ( 2) 00:17:25.105 6.080 - 6.107: 99.4937% ( 1) 00:17:25.105 6.107 - 6.133: 99.4988% ( 1) 00:17:25.105 6.133 - 6.160: 99.5040% ( 1) 00:17:25.105 6.160 - 6.187: 99.5092% ( 1) 00:17:25.105 6.187 - 6.213: 99.5195% ( 2) 00:17:25.105 6.213 - 6.240: 99.5247% ( 1) 00:17:25.105 6.267 - 6.293: 99.5298% ( 1) 00:17:25.105 6.987 - 7.040: 99.5350% ( 1) 00:17:25.105 7.093 - 7.147: 99.5402% ( 1) 00:17:25.105 7.200 - 7.253: 99.5453% ( 1) 00:17:25.105 7.253 - 7.307: 99.5505% ( 1) 00:17:25.105 7.360 - 7.413: 99.5608% ( 2) 00:17:25.105 7.520 - 7.573: 99.5660% ( 1) 00:17:25.105 7.680 - 7.733: 99.5763% ( 2) 00:17:25.105 7.733 - 7.787: 99.5867% ( 2) 00:17:25.105 7.840 - 7.893: 99.6022% ( 3) 00:17:25.105 7.947 - 8.000: 99.6177% ( 3) 00:17:25.105 8.000 - 8.053: 99.6280% ( 2) 00:17:25.105 8.107 - 8.160: 99.6332% ( 1) 00:17:25.105 8.160 - 8.213: 99.6487% ( 3) 00:17:25.105 8.213 - 8.267: 99.6538% ( 1) 00:17:25.105 8.267 - 8.320: 99.6642% ( 2) 00:17:25.105 8.320 - 8.373: 99.6745% ( 2) 00:17:25.105 8.373 - 8.427: 99.6797% ( 1) 00:17:25.105 8.427 - 8.480: 99.6900% ( 2) 00:17:25.105 8.480 - 8.533: 99.7055% ( 3) 00:17:25.105 8.533 - 8.587: 99.7107% ( 1) 00:17:25.105 8.587 - 8.640: 99.7210% ( 2) 00:17:25.105 8.640 - 8.693: 99.7262% ( 1) 00:17:25.105 8.747 - 8.800: 99.7313% ( 1) 00:17:25.105 8.800 - 8.853: 99.7365% ( 1) 00:17:25.105 8.853 - 8.907: 99.7417% ( 1) 00:17:25.105 8.960 - 9.013: 99.7520% ( 2) 00:17:25.105 9.013 - 9.067: 99.7572% ( 1) 00:17:25.105 9.067 - 9.120: 99.7675% ( 2) 00:17:25.105 9.120 - 9.173: 99.7727% ( 1) 00:17:25.105 9.227 - 9.280: 99.7778% ( 1) 00:17:25.105 9.280 - 9.333: 99.7985% ( 4) 00:17:25.105 9.333 - 9.387: 99.8088% ( 2) 00:17:25.105 9.387 - 9.440: 99.8140% ( 1) 00:17:25.105 9.440 - 9.493: 99.8192% ( 1) 00:17:25.105 9.547 - 9.600: 99.8243% ( 1) 00:17:25.105 9.813 - 9.867: 99.8295% ( 1) 00:17:25.105 9.920 - 9.973: 99.8347% ( 1) 00:17:25.105 10.080 - 10.133: 99.8398% ( 1) 00:17:25.105 12.213 - 12.267: 99.8450% ( 1) 00:17:25.105 3986.773 - 4014.080: 100.0000% ( 30) 00:17:25.105 00:17:25.105 Complete histogram 00:17:25.105 
================== 00:17:25.105 Range in us Cumulative Count 00:17:25.105 2.373 - 2.387: 0.0103% ( 2) 00:17:25.105 2.387 - [2024-07-25 16:56:45.331929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:25.105 2.400: 0.5890% ( 112) 00:17:25.105 2.400 - 2.413: 1.0953% ( 98) 00:17:25.105 2.413 - 2.427: 1.2503% ( 30) 00:17:25.105 2.427 - 2.440: 3.8801% ( 509) 00:17:25.105 2.440 - 2.453: 46.8096% ( 8309) 00:17:25.105 2.453 - 2.467: 54.3064% ( 1451) 00:17:25.105 2.467 - 2.480: 73.5727% ( 3729) 00:17:25.105 2.480 - 2.493: 78.8375% ( 1019) 00:17:25.105 2.493 - 2.507: 81.6223% ( 539) 00:17:25.105 2.507 - 2.520: 85.9520% ( 838) 00:17:25.106 2.520 - 2.533: 91.7902% ( 1130) 00:17:25.106 2.533 - 2.547: 95.2829% ( 676) 00:17:25.106 2.547 - 2.560: 97.9127% ( 509) 00:17:25.106 2.560 - 2.573: 98.9718% ( 205) 00:17:25.106 2.573 - 2.587: 99.3438% ( 72) 00:17:25.106 2.587 - 2.600: 99.4368% ( 18) 00:17:25.106 2.600 - 2.613: 99.4420% ( 1) 00:17:25.106 5.520 - 5.547: 99.4472% ( 1) 00:17:25.106 5.547 - 5.573: 99.4523% ( 1) 00:17:25.106 5.840 - 5.867: 99.4575% ( 1) 00:17:25.106 6.080 - 6.107: 99.4730% ( 3) 00:17:25.106 6.160 - 6.187: 99.4782% ( 1) 00:17:25.106 6.187 - 6.213: 99.4833% ( 1) 00:17:25.106 6.320 - 6.347: 99.4885% ( 1) 00:17:25.106 6.347 - 6.373: 99.4988% ( 2) 00:17:25.106 6.400 - 6.427: 99.5040% ( 1) 00:17:25.106 6.480 - 6.507: 99.5143% ( 2) 00:17:25.106 6.507 - 6.533: 99.5195% ( 1) 00:17:25.106 6.587 - 6.613: 99.5350% ( 3) 00:17:25.106 6.640 - 6.667: 99.5453% ( 2) 00:17:25.106 6.667 - 6.693: 99.5505% ( 1) 00:17:25.106 6.693 - 6.720: 99.5557% ( 1) 00:17:25.106 6.773 - 6.800: 99.5608% ( 1) 00:17:25.106 6.880 - 6.933: 99.5660% ( 1) 00:17:25.106 6.987 - 7.040: 99.5712% ( 1) 00:17:25.106 7.040 - 7.093: 99.5763% ( 1) 00:17:25.106 7.093 - 7.147: 99.5815% ( 1) 00:17:25.106 7.147 - 7.200: 99.5867% ( 1) 00:17:25.106 7.360 - 7.413: 99.5970% ( 2) 00:17:25.106 7.413 - 7.467: 99.6022% ( 1) 00:17:25.106 7.573 - 7.627: 99.6073% ( 1) 00:17:25.106 7.627 - 7.680: 99.6125% ( 1) 00:17:25.106 7.840 - 7.893: 99.6177% ( 1) 00:17:25.106 7.893 - 7.947: 99.6228% ( 1) 00:17:25.106 8.053 - 8.107: 99.6280% ( 1) 00:17:25.106 8.160 - 8.213: 99.6332% ( 1) 00:17:25.106 8.267 - 8.320: 99.6383% ( 1) 00:17:25.106 8.960 - 9.013: 99.6435% ( 1) 00:17:25.106 14.293 - 14.400: 99.6487% ( 1) 00:17:25.106 14.507 - 14.613: 99.6538% ( 1) 00:17:25.106 15.147 - 15.253: 99.6590% ( 1) 00:17:25.106 36.267 - 36.480: 99.6642% ( 1) 00:17:25.106 43.093 - 43.307: 99.6693% ( 1) 00:17:25.106 93.013 - 93.440: 99.6745% ( 1) 00:17:25.106 156.160 - 157.013: 99.6797% ( 1) 00:17:25.106 3986.773 - 4014.080: 100.0000% ( 62) 00:17:25.106 00:17:25.106 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:25.106 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:25.106 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:25.106 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:25.106 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.367 [ 00:17:25.367 { 00:17:25.367 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.367 
"subtype": "Discovery", 00:17:25.368 "listen_addresses": [], 00:17:25.368 "allow_any_host": true, 00:17:25.368 "hosts": [] 00:17:25.368 }, 00:17:25.368 { 00:17:25.368 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.368 "subtype": "NVMe", 00:17:25.368 "listen_addresses": [ 00:17:25.368 { 00:17:25.368 "trtype": "VFIOUSER", 00:17:25.368 "adrfam": "IPv4", 00:17:25.368 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.368 "trsvcid": "0" 00:17:25.368 } 00:17:25.368 ], 00:17:25.368 "allow_any_host": true, 00:17:25.368 "hosts": [], 00:17:25.368 "serial_number": "SPDK1", 00:17:25.368 "model_number": "SPDK bdev Controller", 00:17:25.368 "max_namespaces": 32, 00:17:25.368 "min_cntlid": 1, 00:17:25.368 "max_cntlid": 65519, 00:17:25.368 "namespaces": [ 00:17:25.368 { 00:17:25.368 "nsid": 1, 00:17:25.368 "bdev_name": "Malloc1", 00:17:25.368 "name": "Malloc1", 00:17:25.368 "nguid": "7CD6327F34E3445D879252C99BA37F9E", 00:17:25.368 "uuid": "7cd6327f-34e3-445d-8792-52c99ba37f9e" 00:17:25.368 }, 00:17:25.368 { 00:17:25.368 "nsid": 2, 00:17:25.368 "bdev_name": "Malloc3", 00:17:25.368 "name": "Malloc3", 00:17:25.368 "nguid": "0C186F71583B4A508B4376984BFAF401", 00:17:25.368 "uuid": "0c186f71-583b-4a50-8b43-76984bfaf401" 00:17:25.368 } 00:17:25.368 ] 00:17:25.368 }, 00:17:25.368 { 00:17:25.368 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.368 "subtype": "NVMe", 00:17:25.368 "listen_addresses": [ 00:17:25.368 { 00:17:25.368 "trtype": "VFIOUSER", 00:17:25.368 "adrfam": "IPv4", 00:17:25.368 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.368 "trsvcid": "0" 00:17:25.368 } 00:17:25.368 ], 00:17:25.368 "allow_any_host": true, 00:17:25.368 "hosts": [], 00:17:25.368 "serial_number": "SPDK2", 00:17:25.368 "model_number": "SPDK bdev Controller", 00:17:25.368 "max_namespaces": 32, 00:17:25.368 "min_cntlid": 1, 00:17:25.368 "max_cntlid": 65519, 00:17:25.368 "namespaces": [ 00:17:25.368 { 00:17:25.368 "nsid": 1, 00:17:25.368 "bdev_name": "Malloc2", 00:17:25.368 "name": "Malloc2", 00:17:25.368 "nguid": "347A07E015AD42C4BFD919AB34C1E4AC", 00:17:25.368 "uuid": "347a07e0-15ad-42c4-bfd9-19ab34c1e4ac" 00:17:25.368 } 00:17:25.368 ] 00:17:25.368 } 00:17:25.368 ] 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1409663 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:25.368 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:25.368 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.630 Malloc4 00:17:25.630 [2024-07-25 16:56:45.713584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.630 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:25.630 [2024-07-25 16:56:45.883744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:25.891 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.891 Asynchronous Event Request test 00:17:25.891 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.891 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:25.891 Registering asynchronous event callbacks... 00:17:25.891 Starting namespace attribute notice tests for all controllers... 00:17:25.891 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:25.891 aer_cb - Changed Namespace 00:17:25.891 Cleaning up... 00:17:25.891 [ 00:17:25.891 { 00:17:25.891 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.891 "subtype": "Discovery", 00:17:25.891 "listen_addresses": [], 00:17:25.891 "allow_any_host": true, 00:17:25.891 "hosts": [] 00:17:25.891 }, 00:17:25.891 { 00:17:25.891 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.891 "subtype": "NVMe", 00:17:25.891 "listen_addresses": [ 00:17:25.891 { 00:17:25.891 "trtype": "VFIOUSER", 00:17:25.891 "adrfam": "IPv4", 00:17:25.891 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.891 "trsvcid": "0" 00:17:25.891 } 00:17:25.891 ], 00:17:25.891 "allow_any_host": true, 00:17:25.891 "hosts": [], 00:17:25.891 "serial_number": "SPDK1", 00:17:25.891 "model_number": "SPDK bdev Controller", 00:17:25.891 "max_namespaces": 32, 00:17:25.891 "min_cntlid": 1, 00:17:25.891 "max_cntlid": 65519, 00:17:25.891 "namespaces": [ 00:17:25.891 { 00:17:25.891 "nsid": 1, 00:17:25.891 "bdev_name": "Malloc1", 00:17:25.891 "name": "Malloc1", 00:17:25.891 "nguid": "7CD6327F34E3445D879252C99BA37F9E", 00:17:25.891 "uuid": "7cd6327f-34e3-445d-8792-52c99ba37f9e" 00:17:25.891 }, 00:17:25.891 { 00:17:25.891 "nsid": 2, 00:17:25.891 "bdev_name": "Malloc3", 00:17:25.891 "name": "Malloc3", 00:17:25.891 "nguid": "0C186F71583B4A508B4376984BFAF401", 00:17:25.891 "uuid": "0c186f71-583b-4a50-8b43-76984bfaf401" 00:17:25.891 } 00:17:25.891 ] 00:17:25.891 }, 00:17:25.891 { 00:17:25.891 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.891 "subtype": "NVMe", 00:17:25.891 "listen_addresses": [ 00:17:25.891 { 00:17:25.891 "trtype": "VFIOUSER", 00:17:25.891 "adrfam": "IPv4", 00:17:25.891 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.891 "trsvcid": "0" 00:17:25.891 } 00:17:25.891 ], 00:17:25.891 "allow_any_host": true, 00:17:25.891 "hosts": [], 00:17:25.891 
"serial_number": "SPDK2", 00:17:25.891 "model_number": "SPDK bdev Controller", 00:17:25.891 "max_namespaces": 32, 00:17:25.891 "min_cntlid": 1, 00:17:25.891 "max_cntlid": 65519, 00:17:25.891 "namespaces": [ 00:17:25.892 { 00:17:25.892 "nsid": 1, 00:17:25.892 "bdev_name": "Malloc2", 00:17:25.892 "name": "Malloc2", 00:17:25.892 "nguid": "347A07E015AD42C4BFD919AB34C1E4AC", 00:17:25.892 "uuid": "347a07e0-15ad-42c4-bfd9-19ab34c1e4ac" 00:17:25.892 }, 00:17:25.892 { 00:17:25.892 "nsid": 2, 00:17:25.892 "bdev_name": "Malloc4", 00:17:25.892 "name": "Malloc4", 00:17:25.892 "nguid": "635F0B8965C84F6781D273C2E26D7BBB", 00:17:25.892 "uuid": "635f0b89-65c8-4f67-81d2-73c2e26d7bbb" 00:17:25.892 } 00:17:25.892 ] 00:17:25.892 } 00:17:25.892 ] 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1409663 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1400587 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1400587 ']' 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1400587 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1400587 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1400587' 00:17:25.892 killing process with pid 1400587 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1400587 00:17:25.892 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1400587 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1409688 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1409688' 00:17:26.154 Process pid: 1409688 00:17:26.154 16:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1409688 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1409688 ']' 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.154 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:26.155 [2024-07-25 16:56:46.368734] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:26.155 [2024-07-25 16:56:46.369644] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:17:26.155 [2024-07-25 16:56:46.369681] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.155 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.416 [2024-07-25 16:56:46.433086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.417 [2024-07-25 16:56:46.498409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.417 [2024-07-25 16:56:46.498451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.417 [2024-07-25 16:56:46.498460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.417 [2024-07-25 16:56:46.498466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.417 [2024-07-25 16:56:46.498472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.417 [2024-07-25 16:56:46.498635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.417 [2024-07-25 16:56:46.498720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.417 [2024-07-25 16:56:46.498875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.417 [2024-07-25 16:56:46.498878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.417 [2024-07-25 16:56:46.565252] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:26.417 [2024-07-25 16:56:46.565710] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:26.417 [2024-07-25 16:56:46.566376] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:17:26.417 [2024-07-25 16:56:46.566607] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:26.417 [2024-07-25 16:56:46.566784] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:26.988 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.989 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:26.989 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:27.930 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:28.190 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:28.190 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:28.190 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:28.190 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:28.190 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:28.190 Malloc1 00:17:28.190 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:28.451 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:28.713 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:28.713 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:28.713 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:28.713 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:28.974 Malloc2 00:17:28.974 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:29.234 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:29.234 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a 
/var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1409688 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1409688 ']' 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1409688 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409688 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409688' 00:17:29.494 killing process with pid 1409688 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1409688 00:17:29.494 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1409688 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:29.756 00:17:29.756 real 0m50.602s 00:17:29.756 user 3m20.503s 00:17:29.756 sys 0m3.015s 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:29.756 ************************************ 00:17:29.756 END TEST nvmf_vfio_user 00:17:29.756 ************************************ 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.756 ************************************ 00:17:29.756 START TEST nvmf_vfio_user_nvme_compliance 00:17:29.756 ************************************ 00:17:29.756 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:30.018 * Looking for test storage... 
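The Malloc4 hot-add recorded near the top of this test is driven by a touch-file handshake between nvmf_vfio_user.sh and the aer tool. Below is a condensed sketch of that flow, reassembled from the commands visible in the trace; it is not itself part of the captured log, and paths are shortened to their repository-relative form:

  # start the AER listener against the second vfio-user controller in the background;
  # -t tells it to create /tmp/aer_touch_file once its event callbacks are registered
  test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  waitforfile /tmp/aer_touch_file      # helper from autotest_common.sh
  rm -f /tmp/aer_touch_file
  # hot-add a second namespace; the target raises a namespace-attribute-changed AEN
  # and aer logs "aer_cb - Changed Namespace" before cleaning up
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  wait $aerpid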
00:17:30.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:30.018 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1410521 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1410521' 00:17:30.019 Process pid: 1410521 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1410521 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1410521 ']' 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.019 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:30.019 [2024-07-25 16:56:50.161397] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:17:30.019 [2024-07-25 16:56:50.161455] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.019 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.019 [2024-07-25 16:56:50.224667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.019 [2024-07-25 16:56:50.289916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.019 [2024-07-25 16:56:50.289955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.019 [2024-07-25 16:56:50.289963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.019 [2024-07-25 16:56:50.289969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.019 [2024-07-25 16:56:50.289975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.019 [2024-07-25 16:56:50.290103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.019 [2024-07-25 16:56:50.290197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.019 [2024-07-25 16:56:50.290206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.960 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.960 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:30.960 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:31.901 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:31.901 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:31.901 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:31.901 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.901 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.901 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.901 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.902 malloc0 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.902 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.902 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:31.902 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.902 00:17:31.902 00:17:31.902 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.902 http://cunit.sourceforge.net/ 00:17:31.902 00:17:31.902 00:17:31.902 Suite: nvme_compliance 00:17:32.163 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 16:56:52.193717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.163 [2024-07-25 16:56:52.195079] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:32.163 [2024-07-25 16:56:52.195090] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:32.163 [2024-07-25 16:56:52.195095] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:32.163 [2024-07-25 16:56:52.196736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.163 passed 00:17:32.163 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 16:56:52.289334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.163 [2024-07-25 16:56:52.292352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.163 passed 00:17:32.163 Test: admin_identify_ns ...[2024-07-25 16:56:52.388438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.424 [2024-07-25 16:56:52.452213] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:32.425 [2024-07-25 16:56:52.460211] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:32.425 [2024-07-25 
16:56:52.481318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.425 passed 00:17:32.425 Test: admin_get_features_mandatory_features ...[2024-07-25 16:56:52.573002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.425 [2024-07-25 16:56:52.576015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.425 passed 00:17:32.425 Test: admin_get_features_optional_features ...[2024-07-25 16:56:52.669560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.425 [2024-07-25 16:56:52.672574] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.685 passed 00:17:32.685 Test: admin_set_features_number_of_queues ...[2024-07-25 16:56:52.765745] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.685 [2024-07-25 16:56:52.871313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.685 passed 00:17:32.992 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 16:56:52.963944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.993 [2024-07-25 16:56:52.966964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.993 passed 00:17:32.993 Test: admin_get_log_page_with_lpo ...[2024-07-25 16:56:53.059448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.993 [2024-07-25 16:56:53.127210] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:32.993 [2024-07-25 16:56:53.140300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.993 passed 00:17:32.993 Test: fabric_property_get ...[2024-07-25 16:56:53.232330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.993 [2024-07-25 16:56:53.233578] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:32.993 [2024-07-25 16:56:53.235354] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.282 passed 00:17:33.282 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 16:56:53.330916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.282 [2024-07-25 16:56:53.332176] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:33.282 [2024-07-25 16:56:53.333946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.282 passed 00:17:33.282 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 16:56:53.426467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.282 [2024-07-25 16:56:53.510209] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:33.282 [2024-07-25 16:56:53.526217] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:33.282 [2024-07-25 16:56:53.531300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.543 passed 00:17:33.543 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 16:56:53.625297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.543 [2024-07-25 16:56:53.626539] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:17:33.543 [2024-07-25 16:56:53.628315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.543 passed 00:17:33.543 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 16:56:53.721469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.543 [2024-07-25 16:56:53.797212] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:33.803 [2024-07-25 16:56:53.821210] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:33.803 [2024-07-25 16:56:53.826301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.803 passed 00:17:33.803 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 16:56:53.921383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.803 [2024-07-25 16:56:53.922622] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:33.803 [2024-07-25 16:56:53.922643] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:33.803 [2024-07-25 16:56:53.924398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.803 passed 00:17:33.803 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 16:56:54.017543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.064 [2024-07-25 16:56:54.109209] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:34.064 [2024-07-25 16:56:54.117212] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:34.064 [2024-07-25 16:56:54.125207] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:34.064 [2024-07-25 16:56:54.133204] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:34.064 [2024-07-25 16:56:54.162290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.064 passed 00:17:34.064 Test: admin_create_io_sq_verify_pc ...[2024-07-25 16:56:54.255275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.064 [2024-07-25 16:56:54.274217] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:34.064 [2024-07-25 16:56:54.291444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:34.064 passed 00:17:34.325 Test: admin_create_io_qp_max_qps ...[2024-07-25 16:56:54.384965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.268 [2024-07-25 16:56:55.495211] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:35.855 [2024-07-25 16:56:55.882443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.855 passed 00:17:35.855 Test: admin_create_io_sq_shared_cq ...[2024-07-25 16:56:55.975561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.855 [2024-07-25 16:56:56.107208] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:36.116 [2024-07-25 16:56:56.144294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:36.116 passed 00:17:36.116 00:17:36.116 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.116 
suites 1 1 n/a 0 0 00:17:36.116 tests 18 18 18 0 0 00:17:36.116 asserts 360 360 360 0 n/a 00:17:36.116 00:17:36.116 Elapsed time = 1.655 seconds 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1410521 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1410521 ']' 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1410521 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410521 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410521' 00:17:36.116 killing process with pid 1410521 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1410521 00:17:36.116 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1410521 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:36.378 00:17:36.378 real 0m6.431s 00:17:36.378 user 0m18.392s 00:17:36.378 sys 0m0.454s 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 ************************************ 00:17:36.378 END TEST nvmf_vfio_user_nvme_compliance 00:17:36.378 ************************************ 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 ************************************ 00:17:36.378 START TEST nvmf_vfio_user_fuzz 00:17:36.378 ************************************ 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:36.378 * Looking for test storage... 
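The compliance binary above, like the aer and fuzz tools elsewhere in this log, reaches the emulated controller purely through a transport-ID string rather than a PCIe address. As an illustrative aside (the field breakdown is inferred from the values this run used), the invocation reduces to:

  # trtype selects the vfio-user transport, traddr is the socket directory the target
  # listens on (added earlier via nvmf_subsystem_add_listener -a), and subnqn names the
  # subsystem created for the test
  test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'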
00:17:36.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:36.378 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1411828 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1411828' 00:17:36.379 Process pid: 1411828 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1411828 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1411828 ']' 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
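waitforlisten, whose prologue is traced above, blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock before any rpc_cmd calls are issued. A minimal approximation of that wait loop is sketched below; it is illustrative only, and the real helper in autotest_common.sh is more elaborate:

  # poll the RPC socket until the target responds to a trivial RPC
  while ! scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done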
00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.379 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.322 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.322 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:37.322 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.264 malloc0 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
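The rpc_cmd calls traced above assemble the fuzz target; rpc_cmd is effectively a wrapper around scripts/rpc.py, so the same target can be reproduced with the plain RPC client (a sketch using the paths and names from this run):

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

The /var/run/vfio-user socket directory is the traddr that the trid assignment above and the nvme_fuzz invocation that follows point at.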
00:17:38.264 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:10.414 Fuzzing completed. Shutting down the fuzz application 00:18:10.414 00:18:10.414 Dumping successful admin opcodes: 00:18:10.414 8, 9, 10, 24, 00:18:10.414 Dumping successful io opcodes: 00:18:10.414 0, 00:18:10.414 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1132291, total successful commands: 4458, random_seed: 3035755776 00:18:10.414 NS: 0x200003a1ef00 admin qp, Total commands completed: 142498, total successful commands: 1157, random_seed: 2411337088 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1411828 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1411828 ']' 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1411828 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1411828 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1411828' 00:18:10.414 killing process with pid 1411828 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1411828 00:18:10.414 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1411828 00:18:10.414 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:10.414 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:10.414 00:18:10.414 real 0m33.700s 00:18:10.414 user 0m37.880s 00:18:10.414 sys 0m25.801s 00:18:10.414 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.414 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:10.414 
************************************ 00:18:10.414 END TEST nvmf_vfio_user_fuzz 00:18:10.414 ************************************ 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.415 ************************************ 00:18:10.415 START TEST nvmf_auth_target 00:18:10.415 ************************************ 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:10.415 * Looking for test storage... 00:18:10.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.415 16:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
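The host identity reused throughout the auth test comes from the nvmf/common.sh block traced above; a minimal sketch of that setup follows (deriving the host ID by stripping the uuid: prefix is an assumption, though it matches the two values printed in this log):

NVME_HOSTNQN=$(nvme gen-hostnqn)                  # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}               # assumption: host ID is the bare UUID taken from the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422   # listener ports used later in the test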
00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:10.415 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.011 16:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:17.011 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:17.011 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:17.011 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.011 16:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.011 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:17.012 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.012 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.012 16:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.745 ms 00:18:17.012 00:18:17.012 --- 10.0.0.2 ping statistics --- 00:18:17.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.012 rtt min/avg/max/mdev = 0.745/0.745/0.745/0.000 ms 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:18:17.012 00:18:17.012 --- 10.0.0.1 ping statistics --- 00:18:17.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.012 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1422636 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1422636 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1422636 ']' 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.012 16:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.012 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.957 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.957 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:17.957 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.957 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.957 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1422719 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6c323cd2fbeb9eaa919b9762739be8c8be849a58cba98b1a 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cUt 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6c323cd2fbeb9eaa919b9762739be8c8be849a58cba98b1a 0 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6c323cd2fbeb9eaa919b9762739be8c8be849a58cba98b1a 0 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6c323cd2fbeb9eaa919b9762739be8c8be849a58cba98b1a 00:18:17.957 16:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cUt 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cUt 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.cUt 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d9a72a845b0aa019bf3fc029e75985aa01d5286381550243fb5371b40997335b 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VS7 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d9a72a845b0aa019bf3fc029e75985aa01d5286381550243fb5371b40997335b 3 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d9a72a845b0aa019bf3fc029e75985aa01d5286381550243fb5371b40997335b 3 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d9a72a845b0aa019bf3fc029e75985aa01d5286381550243fb5371b40997335b 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VS7 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VS7 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.VS7 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.957 16:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:17.957 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=acbabd2d885a552544939a1e512f4811 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.l1W 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key acbabd2d885a552544939a1e512f4811 1 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 acbabd2d885a552544939a1e512f4811 1 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=acbabd2d885a552544939a1e512f4811 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.l1W 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.l1W 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.l1W 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cc722fcc168cba105af751ffb3897de67fb20ec85bc3ba06 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1IJ 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cc722fcc168cba105af751ffb3897de67fb20ec85bc3ba06 2 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
cc722fcc168cba105af751ffb3897de67fb20ec85bc3ba06 2 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cc722fcc168cba105af751ffb3897de67fb20ec85bc3ba06 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:17.958 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1IJ 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1IJ 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.1IJ 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7c9c278bc913c6067b22832ad362414cf42c320fd6ff5450 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.pii 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7c9c278bc913c6067b22832ad362414cf42c320fd6ff5450 2 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7c9c278bc913c6067b22832ad362414cf42c320fd6ff5450 2 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7c9c278bc913c6067b22832ad362414cf42c320fd6ff5450 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.pii 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.pii 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.pii 00:18:18.220 16:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8db4d6496367a25c7f3d239a7e15ce12 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.udo 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8db4d6496367a25c7f3d239a7e15ce12 1 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8db4d6496367a25c7f3d239a7e15ce12 1 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8db4d6496367a25c7f3d239a7e15ce12 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.udo 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.udo 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.udo 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:18.220 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=20525e5fea336a941ff160ff219a78f8e93af21b5b967450e185298402b6fb1e 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:18.221 
16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dX3 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 20525e5fea336a941ff160ff219a78f8e93af21b5b967450e185298402b6fb1e 3 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 20525e5fea336a941ff160ff219a78f8e93af21b5b967450e185298402b6fb1e 3 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=20525e5fea336a941ff160ff219a78f8e93af21b5b967450e185298402b6fb1e 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dX3 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dX3 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.dX3 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1422636 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1422636 ']' 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.221 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1422719 /var/tmp/host.sock 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1422719 ']' 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
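Each keys[i]/ckeys[i] file created above follows the same gen_dhchap_key recipe: random hex pulled from /dev/urandom, packed into a DHHC-1 secret, and stored 0600 in a temp file. A minimal sketch for the first key (null digest, 48 hex chars) is below; the packing done by the inline "python -" step is not visible in the trace, so the base64(key + little-endian CRC-32) layout used here is an assumption, chosen because it reproduces the shape of the DHHC-1:00:... secret that appears later in this log.

key=$(xxd -p -c0 -l 24 /dev/urandom)             # 48 hex chars, as in "gen_dhchap_key null 48"
file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.cUt
secret=$(python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                       # the ASCII hex string itself is the secret material
crc = zlib.crc32(key).to_bytes(4, "little")      # assumption: 4-byte little-endian CRC-32 trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" 0)                                      # digest index 0 = null (1 = sha256, 2 = sha384, 3 = sha512)
echo "$secret" > "$file"
chmod 0600 "$file"                               # keys[0]=$file; the ckeys[] controller keys are built the same way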
00:18:18.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.525 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cUt 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cUt 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cUt 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.VS7 ]] 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VS7 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VS7 00:18:18.807 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VS7 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.l1W 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.068 16:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.l1W 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.l1W 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.1IJ ]] 00:18:19.068 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1IJ 00:18:19.069 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.069 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.069 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.069 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1IJ 00:18:19.069 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1IJ 00:18:19.330 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:19.330 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pii 00:18:19.330 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.330 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.330 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.331 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pii 00:18:19.331 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pii 00:18:19.591 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.udo ]] 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.udo 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.udo 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.udo 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dX3 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dX3 00:18:19.592 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dX3 00:18:19.854 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:19.854 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:19.854 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.854 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.854 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:19.854 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.115 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.116 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.116 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.116 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.116 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.377 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.377 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.377 { 00:18:20.377 "cntlid": 1, 00:18:20.377 "qid": 0, 00:18:20.377 "state": "enabled", 00:18:20.377 "thread": "nvmf_tgt_poll_group_000", 00:18:20.378 "listen_address": { 00:18:20.378 "trtype": "TCP", 00:18:20.378 "adrfam": "IPv4", 00:18:20.378 "traddr": "10.0.0.2", 00:18:20.378 "trsvcid": "4420" 00:18:20.378 }, 00:18:20.378 "peer_address": { 00:18:20.378 "trtype": "TCP", 00:18:20.378 "adrfam": "IPv4", 00:18:20.378 "traddr": "10.0.0.1", 00:18:20.378 "trsvcid": "41814" 00:18:20.378 }, 00:18:20.378 "auth": { 00:18:20.378 "state": "completed", 00:18:20.378 "digest": "sha256", 00:18:20.378 "dhgroup": "null" 00:18:20.378 } 00:18:20.378 } 00:18:20.378 ]' 00:18:20.378 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.378 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.378 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.639 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.639 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.639 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.639 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.639 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.639 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:21.584 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.845 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:21.845 00:18:21.845 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.845 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.845 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.107 { 00:18:22.107 "cntlid": 3, 00:18:22.107 "qid": 0, 00:18:22.107 "state": "enabled", 00:18:22.107 "thread": "nvmf_tgt_poll_group_000", 00:18:22.107 "listen_address": { 00:18:22.107 "trtype": "TCP", 00:18:22.107 "adrfam": "IPv4", 00:18:22.107 "traddr": "10.0.0.2", 00:18:22.107 "trsvcid": "4420" 00:18:22.107 }, 00:18:22.107 "peer_address": { 00:18:22.107 "trtype": "TCP", 00:18:22.107 "adrfam": "IPv4", 00:18:22.107 "traddr": "10.0.0.1", 00:18:22.107 "trsvcid": "41836" 00:18:22.107 }, 00:18:22.107 "auth": { 00:18:22.107 "state": "completed", 00:18:22.107 "digest": "sha256", 00:18:22.107 "dhgroup": "null" 00:18:22.107 } 00:18:22.107 } 00:18:22.107 ]' 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.107 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.368 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.368 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.368 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.312 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.312 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.573 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.573 { 00:18:23.573 "cntlid": 5, 00:18:23.573 "qid": 0, 00:18:23.573 "state": "enabled", 00:18:23.573 "thread": "nvmf_tgt_poll_group_000", 00:18:23.573 "listen_address": { 00:18:23.573 "trtype": "TCP", 00:18:23.573 "adrfam": "IPv4", 00:18:23.573 "traddr": "10.0.0.2", 00:18:23.573 "trsvcid": "4420" 00:18:23.573 }, 00:18:23.573 "peer_address": { 00:18:23.573 "trtype": "TCP", 00:18:23.573 "adrfam": "IPv4", 00:18:23.573 "traddr": "10.0.0.1", 00:18:23.573 "trsvcid": "41852" 00:18:23.573 }, 00:18:23.573 "auth": { 00:18:23.573 "state": "completed", 00:18:23.573 "digest": "sha256", 00:18:23.573 "dhgroup": "null" 00:18:23.573 } 00:18:23.573 } 00:18:23.573 ]' 00:18:23.573 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.835 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.835 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.835 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.835 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.835 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.835 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.835 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.835 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.776 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.777 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.038 00:18:25.038 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.038 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.038 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.299 { 00:18:25.299 "cntlid": 7, 00:18:25.299 "qid": 0, 00:18:25.299 "state": "enabled", 00:18:25.299 "thread": "nvmf_tgt_poll_group_000", 00:18:25.299 "listen_address": { 00:18:25.299 "trtype": "TCP", 00:18:25.299 "adrfam": "IPv4", 00:18:25.299 "traddr": "10.0.0.2", 00:18:25.299 "trsvcid": "4420" 00:18:25.299 }, 00:18:25.299 "peer_address": { 00:18:25.299 "trtype": "TCP", 00:18:25.299 "adrfam": "IPv4", 00:18:25.299 "traddr": "10.0.0.1", 00:18:25.299 "trsvcid": "53924" 00:18:25.299 }, 00:18:25.299 "auth": { 00:18:25.299 "state": "completed", 00:18:25.299 "digest": "sha256", 00:18:25.299 "dhgroup": "null" 00:18:25.299 } 00:18:25.299 } 00:18:25.299 ]' 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:25.299 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.561 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.561 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.561 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.561 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.504 16:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.504 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.505 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.766 00:18:26.766 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.766 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.766 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.766 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.766 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.766 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.766 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.766 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.766 16:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.766 { 00:18:26.766 "cntlid": 9, 00:18:26.766 "qid": 0, 00:18:26.766 "state": "enabled", 00:18:26.766 "thread": "nvmf_tgt_poll_group_000", 00:18:26.766 "listen_address": { 00:18:26.766 "trtype": "TCP", 00:18:26.766 "adrfam": "IPv4", 00:18:26.766 "traddr": "10.0.0.2", 00:18:26.766 "trsvcid": "4420" 00:18:26.766 }, 00:18:26.766 "peer_address": { 00:18:26.766 "trtype": "TCP", 00:18:26.766 "adrfam": "IPv4", 00:18:26.766 "traddr": "10.0.0.1", 00:18:26.766 "trsvcid": "53960" 00:18:26.766 }, 00:18:26.766 "auth": { 00:18:26.766 "state": "completed", 00:18:26.766 "digest": "sha256", 00:18:26.766 "dhgroup": "ffdhe2048" 00:18:26.766 } 00:18:26.766 } 00:18:26.766 ]' 00:18:26.766 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.028 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.028 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.028 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.028 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.028 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.028 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.028 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.289 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:27.861 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.123 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.385 00:18:28.385 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.385 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.385 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.647 { 00:18:28.647 "cntlid": 11, 00:18:28.647 "qid": 0, 00:18:28.647 "state": "enabled", 00:18:28.647 "thread": "nvmf_tgt_poll_group_000", 00:18:28.647 "listen_address": { 
00:18:28.647 "trtype": "TCP", 00:18:28.647 "adrfam": "IPv4", 00:18:28.647 "traddr": "10.0.0.2", 00:18:28.647 "trsvcid": "4420" 00:18:28.647 }, 00:18:28.647 "peer_address": { 00:18:28.647 "trtype": "TCP", 00:18:28.647 "adrfam": "IPv4", 00:18:28.647 "traddr": "10.0.0.1", 00:18:28.647 "trsvcid": "53998" 00:18:28.647 }, 00:18:28.647 "auth": { 00:18:28.647 "state": "completed", 00:18:28.647 "digest": "sha256", 00:18:28.647 "dhgroup": "ffdhe2048" 00:18:28.647 } 00:18:28.647 } 00:18:28.647 ]' 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.647 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.908 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.852 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.113 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.113 { 00:18:30.113 "cntlid": 13, 00:18:30.113 "qid": 0, 00:18:30.113 "state": "enabled", 00:18:30.113 "thread": "nvmf_tgt_poll_group_000", 00:18:30.113 "listen_address": { 00:18:30.113 "trtype": "TCP", 00:18:30.113 "adrfam": "IPv4", 00:18:30.113 "traddr": "10.0.0.2", 00:18:30.113 "trsvcid": "4420" 00:18:30.113 }, 00:18:30.113 "peer_address": { 00:18:30.113 "trtype": "TCP", 00:18:30.113 "adrfam": "IPv4", 00:18:30.113 "traddr": "10.0.0.1", 00:18:30.113 "trsvcid": "54026" 00:18:30.113 }, 00:18:30.113 "auth": { 00:18:30.113 
"state": "completed", 00:18:30.113 "digest": "sha256", 00:18:30.113 "dhgroup": "ffdhe2048" 00:18:30.113 } 00:18:30.113 } 00:18:30.113 ]' 00:18:30.113 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.375 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.375 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.375 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.375 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.375 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.375 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.375 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.636 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.209 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.470 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.732 00:18:31.732 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.732 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.732 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.993 { 00:18:31.993 "cntlid": 15, 00:18:31.993 "qid": 0, 00:18:31.993 "state": "enabled", 00:18:31.993 "thread": "nvmf_tgt_poll_group_000", 00:18:31.993 "listen_address": { 00:18:31.993 "trtype": "TCP", 00:18:31.993 "adrfam": "IPv4", 00:18:31.993 "traddr": "10.0.0.2", 00:18:31.993 "trsvcid": "4420" 00:18:31.993 }, 00:18:31.993 "peer_address": { 00:18:31.993 "trtype": "TCP", 00:18:31.993 "adrfam": "IPv4", 00:18:31.993 "traddr": "10.0.0.1", 00:18:31.993 "trsvcid": "54068" 00:18:31.993 }, 00:18:31.993 "auth": { 00:18:31.993 "state": "completed", 00:18:31.993 "digest": "sha256", 00:18:31.993 "dhgroup": "ffdhe2048" 00:18:31.993 } 00:18:31.993 } 00:18:31.993 ]' 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.993 16:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.993 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.255 16:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:18:32.826 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.826 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.826 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.826 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.087 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.088 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.361 00:18:33.361 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.361 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.361 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.669 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.669 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.669 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.669 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.669 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.669 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.669 { 00:18:33.669 "cntlid": 17, 00:18:33.669 "qid": 0, 00:18:33.669 "state": "enabled", 00:18:33.669 "thread": "nvmf_tgt_poll_group_000", 00:18:33.669 "listen_address": { 00:18:33.669 "trtype": "TCP", 00:18:33.669 "adrfam": "IPv4", 00:18:33.670 "traddr": "10.0.0.2", 00:18:33.670 "trsvcid": "4420" 00:18:33.670 }, 00:18:33.670 "peer_address": { 00:18:33.670 "trtype": "TCP", 00:18:33.670 "adrfam": "IPv4", 00:18:33.670 "traddr": "10.0.0.1", 00:18:33.670 "trsvcid": "54100" 00:18:33.670 }, 00:18:33.670 "auth": { 00:18:33.670 "state": "completed", 00:18:33.670 "digest": "sha256", 00:18:33.670 "dhgroup": "ffdhe3072" 00:18:33.670 } 00:18:33.670 } 00:18:33.670 ]' 00:18:33.670 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.670 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.670 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.670 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.670 16:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.670 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.670 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.670 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.931 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:18:34.502 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.772 16:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.772 16:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.038 00:18:35.038 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.038 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.038 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.299 { 00:18:35.299 "cntlid": 19, 00:18:35.299 "qid": 0, 00:18:35.299 "state": "enabled", 00:18:35.299 "thread": "nvmf_tgt_poll_group_000", 00:18:35.299 "listen_address": { 00:18:35.299 "trtype": "TCP", 00:18:35.299 "adrfam": "IPv4", 00:18:35.299 "traddr": "10.0.0.2", 00:18:35.299 "trsvcid": "4420" 00:18:35.299 }, 00:18:35.299 "peer_address": { 00:18:35.299 "trtype": "TCP", 00:18:35.299 "adrfam": "IPv4", 00:18:35.299 "traddr": "10.0.0.1", 00:18:35.299 "trsvcid": "38898" 00:18:35.299 }, 00:18:35.299 "auth": { 00:18:35.299 "state": "completed", 00:18:35.299 "digest": "sha256", 00:18:35.299 "dhgroup": "ffdhe3072" 00:18:35.299 } 00:18:35.299 } 00:18:35.299 ]' 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.299 16:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.299 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.561 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.505 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.766 00:18:36.766 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.766 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.766 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.028 { 00:18:37.028 "cntlid": 21, 00:18:37.028 "qid": 0, 00:18:37.028 "state": "enabled", 00:18:37.028 "thread": "nvmf_tgt_poll_group_000", 00:18:37.028 "listen_address": { 00:18:37.028 "trtype": "TCP", 00:18:37.028 "adrfam": "IPv4", 00:18:37.028 "traddr": "10.0.0.2", 00:18:37.028 "trsvcid": "4420" 00:18:37.028 }, 00:18:37.028 "peer_address": { 00:18:37.028 "trtype": "TCP", 00:18:37.028 "adrfam": "IPv4", 00:18:37.028 "traddr": "10.0.0.1", 00:18:37.028 "trsvcid": "38918" 00:18:37.028 }, 00:18:37.028 "auth": { 00:18:37.028 "state": "completed", 00:18:37.028 "digest": "sha256", 00:18:37.028 "dhgroup": "ffdhe3072" 00:18:37.028 } 00:18:37.028 } 00:18:37.028 ]' 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.028 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.301 
16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.245 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.506 00:18:38.506 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.506 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.506 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.767 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.768 { 00:18:38.768 "cntlid": 23, 00:18:38.768 "qid": 0, 00:18:38.768 "state": "enabled", 00:18:38.768 "thread": "nvmf_tgt_poll_group_000", 00:18:38.768 "listen_address": { 00:18:38.768 "trtype": "TCP", 00:18:38.768 "adrfam": "IPv4", 00:18:38.768 "traddr": "10.0.0.2", 00:18:38.768 "trsvcid": "4420" 00:18:38.768 }, 00:18:38.768 "peer_address": { 00:18:38.768 "trtype": "TCP", 00:18:38.768 "adrfam": "IPv4", 00:18:38.768 "traddr": "10.0.0.1", 00:18:38.768 "trsvcid": "38950" 00:18:38.768 }, 00:18:38.768 "auth": { 00:18:38.768 "state": "completed", 00:18:38.768 "digest": "sha256", 00:18:38.768 "dhgroup": "ffdhe3072" 00:18:38.768 } 00:18:38.768 } 00:18:38.768 ]' 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.768 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.029 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:18:39.602 16:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:39.602 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.863 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.124 00:18:40.124 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.124 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.124 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.385 { 00:18:40.385 "cntlid": 25, 00:18:40.385 "qid": 0, 00:18:40.385 "state": "enabled", 00:18:40.385 "thread": "nvmf_tgt_poll_group_000", 00:18:40.385 "listen_address": { 00:18:40.385 "trtype": "TCP", 00:18:40.385 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.2", 00:18:40.385 "trsvcid": "4420" 00:18:40.385 }, 00:18:40.385 "peer_address": { 00:18:40.385 "trtype": "TCP", 00:18:40.385 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.1", 00:18:40.385 "trsvcid": "38976" 00:18:40.385 }, 00:18:40.385 "auth": { 00:18:40.385 "state": "completed", 00:18:40.385 "digest": "sha256", 00:18:40.385 "dhgroup": "ffdhe4096" 00:18:40.385 } 00:18:40.385 } 00:18:40.385 ]' 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.385 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.647 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
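For readability, the cycle this trace keeps repeating can be summarized by the short sketch below. It is reconstructed only from the commands visible in the log: the direct rpc.py calls stand in for the rpc_cmd/hostrpc wrappers (whose definitions and sockets are not shown in this section), the loop bounds and the "key3 has no controller key" rule are inferred from the trace, and the nvme-cli reconnect with the raw DHHC-1 secrets is left as a comment rather than guessed at.

#!/usr/bin/env bash
# Sketch of the connect/verify/teardown cycle seen in the trace above; treat the
# [[ ]] checks as assertions (set -e makes a failed check abort the run).
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
  for keyid in 0 1 2 3; do
    ckey=()
    # In the trace, keys 0-2 pair with ckey0-ckey2; key3 is used without a controller key.
    (( keyid < 3 )) && ckey=(--dhchap-ctrlr-key "ckey$keyid")

    # Host side: restrict the initiator to a single digest/dhgroup combination.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # Target side: allow the host NQN with this key pair (rpc_cmd in the log; default socket assumed).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"

    # Attach through the host-side RPC server, then verify the negotiated auth parameters.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]

    # Teardown: detach the bdev controller; the log then also reconnects once with
    # nvme-cli ("nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...") and
    # disconnects before removing the host. The raw secrets are omitted here.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done

In the portion of the log shown here the digest stays sha256 while the outer loop advances the DH group from ffdhe3072 through ffdhe4096 to ffdhe6144.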
00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.590 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.851 00:18:41.852 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.852 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.852 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.113 { 00:18:42.113 "cntlid": 27, 00:18:42.113 "qid": 0, 00:18:42.113 "state": "enabled", 00:18:42.113 "thread": "nvmf_tgt_poll_group_000", 00:18:42.113 "listen_address": { 00:18:42.113 "trtype": "TCP", 00:18:42.113 "adrfam": "IPv4", 00:18:42.113 "traddr": "10.0.0.2", 00:18:42.113 "trsvcid": "4420" 00:18:42.113 }, 00:18:42.113 "peer_address": { 00:18:42.113 "trtype": "TCP", 00:18:42.113 "adrfam": "IPv4", 00:18:42.113 "traddr": "10.0.0.1", 00:18:42.113 "trsvcid": "39006" 00:18:42.113 }, 00:18:42.113 "auth": { 00:18:42.113 "state": "completed", 00:18:42.113 "digest": "sha256", 00:18:42.113 "dhgroup": "ffdhe4096" 00:18:42.113 } 00:18:42.113 } 00:18:42.113 ]' 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.113 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.375 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.321 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.581 00:18:43.581 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.581 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.581 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.842 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.842 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.842 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.842 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.842 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.843 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.843 { 00:18:43.843 "cntlid": 29, 00:18:43.843 "qid": 0, 00:18:43.843 "state": "enabled", 00:18:43.843 "thread": "nvmf_tgt_poll_group_000", 00:18:43.843 "listen_address": { 00:18:43.843 "trtype": "TCP", 00:18:43.843 "adrfam": "IPv4", 00:18:43.843 "traddr": "10.0.0.2", 00:18:43.843 "trsvcid": "4420" 00:18:43.843 }, 00:18:43.843 "peer_address": { 00:18:43.843 "trtype": "TCP", 00:18:43.843 "adrfam": "IPv4", 00:18:43.843 "traddr": "10.0.0.1", 00:18:43.843 "trsvcid": "39018" 00:18:43.843 }, 00:18:43.843 "auth": { 00:18:43.843 "state": "completed", 00:18:43.843 "digest": "sha256", 00:18:43.843 "dhgroup": "ffdhe4096" 00:18:43.843 } 00:18:43.843 } 00:18:43.843 ]' 00:18:43.843 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.843 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.843 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.843 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.843 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.843 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.843 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.843 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.104 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:18:44.677 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.938 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.938 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.938 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.938 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.938 16:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.938 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.938 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.938 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.200 00:18:45.200 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.200 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.200 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.461 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.461 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.461 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.461 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.461 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:45.461 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.461 { 00:18:45.461 "cntlid": 31, 00:18:45.461 "qid": 0, 00:18:45.461 "state": "enabled", 00:18:45.461 "thread": "nvmf_tgt_poll_group_000", 00:18:45.461 "listen_address": { 00:18:45.461 "trtype": "TCP", 00:18:45.462 "adrfam": "IPv4", 00:18:45.462 "traddr": "10.0.0.2", 00:18:45.462 "trsvcid": "4420" 00:18:45.462 }, 00:18:45.462 "peer_address": { 00:18:45.462 "trtype": "TCP", 00:18:45.462 "adrfam": "IPv4", 00:18:45.462 "traddr": "10.0.0.1", 00:18:45.462 "trsvcid": "53308" 00:18:45.462 }, 00:18:45.462 "auth": { 00:18:45.462 "state": "completed", 00:18:45.462 "digest": "sha256", 00:18:45.462 "dhgroup": "ffdhe4096" 00:18:45.462 } 00:18:45.462 } 00:18:45.462 ]' 00:18:45.462 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.462 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.462 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.462 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.462 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.723 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.723 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.723 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.723 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.668 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.930 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.191 { 00:18:47.191 "cntlid": 33, 00:18:47.191 "qid": 0, 00:18:47.191 "state": "enabled", 00:18:47.191 "thread": "nvmf_tgt_poll_group_000", 00:18:47.191 "listen_address": { 
00:18:47.191 "trtype": "TCP", 00:18:47.191 "adrfam": "IPv4", 00:18:47.191 "traddr": "10.0.0.2", 00:18:47.191 "trsvcid": "4420" 00:18:47.191 }, 00:18:47.191 "peer_address": { 00:18:47.191 "trtype": "TCP", 00:18:47.191 "adrfam": "IPv4", 00:18:47.191 "traddr": "10.0.0.1", 00:18:47.191 "trsvcid": "53316" 00:18:47.191 }, 00:18:47.191 "auth": { 00:18:47.191 "state": "completed", 00:18:47.191 "digest": "sha256", 00:18:47.191 "dhgroup": "ffdhe6144" 00:18:47.191 } 00:18:47.191 } 00:18:47.191 ]' 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.191 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.453 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.453 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.453 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.453 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.453 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.453 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:18:48.433 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:48.434 16:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.434 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.717 00:18:48.979 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.979 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.979 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.979 { 00:18:48.979 "cntlid": 35, 00:18:48.979 "qid": 0, 00:18:48.979 "state": "enabled", 00:18:48.979 "thread": "nvmf_tgt_poll_group_000", 00:18:48.979 "listen_address": { 00:18:48.979 "trtype": "TCP", 00:18:48.979 "adrfam": "IPv4", 00:18:48.979 "traddr": "10.0.0.2", 00:18:48.979 "trsvcid": "4420" 00:18:48.979 }, 00:18:48.979 "peer_address": { 00:18:48.979 "trtype": "TCP", 00:18:48.979 "adrfam": "IPv4", 00:18:48.979 "traddr": "10.0.0.1", 00:18:48.979 "trsvcid": "53336" 00:18:48.979 
}, 00:18:48.979 "auth": { 00:18:48.979 "state": "completed", 00:18:48.979 "digest": "sha256", 00:18:48.979 "dhgroup": "ffdhe6144" 00:18:48.979 } 00:18:48.979 } 00:18:48.979 ]' 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.979 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.241 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.241 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.241 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.241 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.185 16:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.185 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.758 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.758 { 00:18:50.758 "cntlid": 37, 00:18:50.758 "qid": 0, 00:18:50.758 "state": "enabled", 00:18:50.758 "thread": "nvmf_tgt_poll_group_000", 00:18:50.758 "listen_address": { 00:18:50.758 "trtype": "TCP", 00:18:50.758 "adrfam": "IPv4", 00:18:50.758 "traddr": "10.0.0.2", 00:18:50.758 "trsvcid": "4420" 00:18:50.758 }, 00:18:50.758 "peer_address": { 00:18:50.758 "trtype": "TCP", 00:18:50.758 "adrfam": "IPv4", 00:18:50.758 "traddr": "10.0.0.1", 00:18:50.758 "trsvcid": "53372" 00:18:50.758 }, 00:18:50.758 "auth": { 00:18:50.758 "state": "completed", 00:18:50.758 "digest": "sha256", 00:18:50.758 "dhgroup": "ffdhe6144" 00:18:50.758 } 00:18:50.758 } 00:18:50.758 ]' 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.758 16:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.758 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.758 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.758 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.020 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.020 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.020 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.020 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:18:51.965 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.965 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.965 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.965 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.965 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.538 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.538 { 00:18:52.538 "cntlid": 39, 00:18:52.538 "qid": 0, 00:18:52.538 "state": "enabled", 00:18:52.538 "thread": "nvmf_tgt_poll_group_000", 00:18:52.538 "listen_address": { 00:18:52.538 "trtype": "TCP", 00:18:52.538 "adrfam": "IPv4", 00:18:52.538 "traddr": "10.0.0.2", 00:18:52.538 "trsvcid": "4420" 00:18:52.538 }, 00:18:52.538 "peer_address": { 00:18:52.538 "trtype": "TCP", 00:18:52.538 "adrfam": "IPv4", 00:18:52.538 "traddr": "10.0.0.1", 00:18:52.538 "trsvcid": "53396" 00:18:52.538 }, 00:18:52.538 "auth": { 00:18:52.538 "state": "completed", 00:18:52.538 "digest": "sha256", 00:18:52.538 "dhgroup": "ffdhe6144" 00:18:52.538 } 00:18:52.538 } 00:18:52.538 ]' 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.538 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.800 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.800 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.800 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.800 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.744 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.317 00:18:54.317 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.317 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.317 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.578 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.578 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.578 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.578 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.578 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.578 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.578 { 00:18:54.578 "cntlid": 41, 00:18:54.578 "qid": 0, 00:18:54.578 "state": "enabled", 00:18:54.578 "thread": "nvmf_tgt_poll_group_000", 00:18:54.578 "listen_address": { 00:18:54.578 "trtype": "TCP", 00:18:54.578 "adrfam": "IPv4", 00:18:54.578 "traddr": "10.0.0.2", 00:18:54.578 "trsvcid": "4420" 00:18:54.578 }, 00:18:54.578 "peer_address": { 00:18:54.578 "trtype": "TCP", 00:18:54.578 "adrfam": "IPv4", 00:18:54.578 "traddr": "10.0.0.1", 00:18:54.578 "trsvcid": "52682" 00:18:54.578 }, 00:18:54.578 "auth": { 00:18:54.578 "state": "completed", 00:18:54.579 "digest": "sha256", 00:18:54.579 "dhgroup": "ffdhe8192" 00:18:54.579 } 00:18:54.579 } 00:18:54.579 ]' 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:54.579 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.840 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:18:55.413 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.675 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.247 00:18:56.247 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.247 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.247 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.509 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.509 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.509 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.509 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.509 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.509 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.509 { 00:18:56.509 "cntlid": 43, 00:18:56.509 "qid": 0, 00:18:56.509 "state": "enabled", 00:18:56.509 "thread": "nvmf_tgt_poll_group_000", 00:18:56.509 "listen_address": { 00:18:56.509 "trtype": "TCP", 00:18:56.510 "adrfam": "IPv4", 00:18:56.510 "traddr": "10.0.0.2", 00:18:56.510 "trsvcid": "4420" 00:18:56.510 }, 00:18:56.510 "peer_address": { 00:18:56.510 "trtype": "TCP", 00:18:56.510 "adrfam": "IPv4", 00:18:56.510 "traddr": "10.0.0.1", 00:18:56.510 "trsvcid": "52718" 00:18:56.510 }, 00:18:56.510 "auth": { 00:18:56.510 "state": "completed", 00:18:56.510 "digest": "sha256", 00:18:56.510 "dhgroup": "ffdhe8192" 00:18:56.510 } 00:18:56.510 } 00:18:56.510 ]' 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.510 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.771 16:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:18:57.342 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.342 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.342 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.342 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.603 16:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.174 00:18:58.174 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.174 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.174 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.441 { 00:18:58.441 "cntlid": 45, 00:18:58.441 "qid": 0, 00:18:58.441 "state": "enabled", 00:18:58.441 "thread": "nvmf_tgt_poll_group_000", 00:18:58.441 "listen_address": { 00:18:58.441 "trtype": "TCP", 00:18:58.441 "adrfam": "IPv4", 00:18:58.441 "traddr": "10.0.0.2", 00:18:58.441 "trsvcid": "4420" 00:18:58.441 }, 00:18:58.441 "peer_address": { 00:18:58.441 "trtype": "TCP", 00:18:58.441 "adrfam": "IPv4", 00:18:58.441 "traddr": "10.0.0.1", 00:18:58.441 "trsvcid": "52750" 00:18:58.441 }, 00:18:58.441 "auth": { 00:18:58.441 "state": "completed", 00:18:58.441 "digest": "sha256", 00:18:58.441 "dhgroup": "ffdhe8192" 00:18:58.441 } 00:18:58.441 } 00:18:58.441 ]' 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.441 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.702 16:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret 
DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.274 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.535 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.107 00:19:00.107 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.107 16:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.107 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.378 { 00:19:00.378 "cntlid": 47, 00:19:00.378 "qid": 0, 00:19:00.378 "state": "enabled", 00:19:00.378 "thread": "nvmf_tgt_poll_group_000", 00:19:00.378 "listen_address": { 00:19:00.378 "trtype": "TCP", 00:19:00.378 "adrfam": "IPv4", 00:19:00.378 "traddr": "10.0.0.2", 00:19:00.378 "trsvcid": "4420" 00:19:00.378 }, 00:19:00.378 "peer_address": { 00:19:00.378 "trtype": "TCP", 00:19:00.378 "adrfam": "IPv4", 00:19:00.378 "traddr": "10.0.0.1", 00:19:00.378 "trsvcid": "52768" 00:19:00.378 }, 00:19:00.378 "auth": { 00:19:00.378 "state": "completed", 00:19:00.378 "digest": "sha256", 00:19:00.378 "dhgroup": "ffdhe8192" 00:19:00.378 } 00:19:00.378 } 00:19:00.378 ]' 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.378 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.639 16:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.211 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.473 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.734 00:19:01.734 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.734 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.734 16:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.995 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.995 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.995 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.995 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.995 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.995 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.995 { 00:19:01.995 "cntlid": 49, 00:19:01.995 "qid": 0, 00:19:01.995 "state": "enabled", 00:19:01.995 "thread": "nvmf_tgt_poll_group_000", 00:19:01.995 "listen_address": { 00:19:01.995 "trtype": "TCP", 00:19:01.995 "adrfam": "IPv4", 00:19:01.995 "traddr": "10.0.0.2", 00:19:01.995 "trsvcid": "4420" 00:19:01.995 }, 00:19:01.995 "peer_address": { 00:19:01.995 "trtype": "TCP", 00:19:01.996 "adrfam": "IPv4", 00:19:01.996 "traddr": "10.0.0.1", 00:19:01.996 "trsvcid": "52800" 00:19:01.996 }, 00:19:01.996 "auth": { 00:19:01.996 "state": "completed", 00:19:01.996 "digest": "sha384", 00:19:01.996 "dhgroup": "null" 00:19:01.996 } 00:19:01.996 } 00:19:01.996 ]' 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.996 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.256 16:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:03.200 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.200 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.200 16:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.200 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.201 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.462 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.462 { 00:19:03.462 "cntlid": 51, 00:19:03.462 "qid": 0, 00:19:03.462 "state": "enabled", 00:19:03.462 "thread": "nvmf_tgt_poll_group_000", 00:19:03.462 "listen_address": { 00:19:03.462 "trtype": "TCP", 00:19:03.462 "adrfam": "IPv4", 00:19:03.462 "traddr": "10.0.0.2", 00:19:03.462 "trsvcid": "4420" 00:19:03.462 }, 00:19:03.462 "peer_address": { 00:19:03.462 "trtype": "TCP", 00:19:03.462 "adrfam": "IPv4", 00:19:03.462 "traddr": "10.0.0.1", 00:19:03.462 "trsvcid": "52816" 00:19:03.462 }, 00:19:03.462 "auth": { 00:19:03.462 "state": "completed", 00:19:03.462 "digest": "sha384", 00:19:03.462 "dhgroup": "null" 00:19:03.462 } 00:19:03.462 } 00:19:03.462 ]' 00:19:03.462 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.756 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.756 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.756 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.756 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.756 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.756 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.756 16:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.756 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:04.698 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.699 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.959 00:19:04.959 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.960 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.960 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.221 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.221 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.221 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.221 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.221 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:05.221 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.221 { 00:19:05.221 "cntlid": 53, 00:19:05.221 "qid": 0, 00:19:05.221 "state": "enabled", 00:19:05.221 "thread": "nvmf_tgt_poll_group_000", 00:19:05.221 "listen_address": { 00:19:05.221 "trtype": "TCP", 00:19:05.221 "adrfam": "IPv4", 00:19:05.221 "traddr": "10.0.0.2", 00:19:05.222 "trsvcid": "4420" 00:19:05.222 }, 00:19:05.222 "peer_address": { 00:19:05.222 "trtype": "TCP", 00:19:05.222 "adrfam": "IPv4", 00:19:05.222 "traddr": "10.0.0.1", 00:19:05.222 "trsvcid": "41840" 00:19:05.222 }, 00:19:05.222 "auth": { 00:19:05.222 "state": "completed", 00:19:05.222 "digest": "sha384", 00:19:05.222 "dhgroup": "null" 00:19:05.222 } 00:19:05.222 } 00:19:05.222 ]' 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.222 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.483 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:06.056 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.318 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.580 00:19:06.580 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.580 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.580 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.842 { 00:19:06.842 "cntlid": 55, 00:19:06.842 "qid": 0, 00:19:06.842 "state": "enabled", 00:19:06.842 "thread": "nvmf_tgt_poll_group_000", 00:19:06.842 "listen_address": { 00:19:06.842 "trtype": "TCP", 00:19:06.842 "adrfam": "IPv4", 00:19:06.842 "traddr": "10.0.0.2", 00:19:06.842 "trsvcid": "4420" 00:19:06.842 }, 00:19:06.842 "peer_address": { 
00:19:06.842 "trtype": "TCP", 00:19:06.842 "adrfam": "IPv4", 00:19:06.842 "traddr": "10.0.0.1", 00:19:06.842 "trsvcid": "41866" 00:19:06.842 }, 00:19:06.842 "auth": { 00:19:06.842 "state": "completed", 00:19:06.842 "digest": "sha384", 00:19:06.842 "dhgroup": "null" 00:19:06.842 } 00:19:06.842 } 00:19:06.842 ]' 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.842 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.842 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.842 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.842 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.104 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:07.676 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.676 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.676 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.676 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.937 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.937 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.937 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.937 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:07.937 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:07.937 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:07.937 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.937 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:07.937 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.937 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.938 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.938 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.938 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.938 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.938 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.938 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.938 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.199 00:19:08.199 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.199 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.199 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.460 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.460 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.460 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.460 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.460 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.460 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.460 { 00:19:08.460 "cntlid": 57, 00:19:08.460 "qid": 0, 00:19:08.460 "state": "enabled", 00:19:08.460 "thread": "nvmf_tgt_poll_group_000", 00:19:08.460 "listen_address": { 00:19:08.460 "trtype": "TCP", 00:19:08.460 "adrfam": "IPv4", 00:19:08.460 "traddr": "10.0.0.2", 00:19:08.461 "trsvcid": "4420" 00:19:08.461 }, 00:19:08.461 "peer_address": { 00:19:08.461 "trtype": "TCP", 00:19:08.461 "adrfam": "IPv4", 00:19:08.461 "traddr": "10.0.0.1", 00:19:08.461 "trsvcid": "41886" 00:19:08.461 }, 00:19:08.461 "auth": { 00:19:08.461 "state": "completed", 00:19:08.461 "digest": "sha384", 00:19:08.461 "dhgroup": "ffdhe2048" 00:19:08.461 } 00:19:08.461 } 00:19:08.461 ]' 
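For reference, one connect_authenticate pass of the kind captured above reduces to the following sketch, built only from the RPCs that appear in this log. It assumes rpc.py comes from the same SPDK checkout, that /var/tmp/host.sock is the host-side bdev_nvme application's RPC socket (as used throughout this log) while the target application answers on rpc.py's default socket, and that key0/ckey0 name DH-HMAC-CHAP keys registered during earlier test setup that is not part of this excerpt.

# Condensed sketch of one connect_authenticate pass (assumptions noted above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Restrict the host to one digest/dhgroup combination for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Allow the host on the target subsystem with the matching key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach from the SPDK host app; DH-HMAC-CHAP runs during this connect.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the negotiated parameters on the resulting qpair, as the jq checks
# surrounding this point do (.digest, .dhgroup, .state == completed).
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq '.[0].auth'

# Tear down before the next key/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The kernel-initiator leg that follows each pass in the log (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret, then nvme disconnect) exercises the same key material in its DHHC-1 wire format.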
00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.461 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.723 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:09.295 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.556 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.557 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.557 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.557 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.557 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.557 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.835 00:19:09.835 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.835 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.835 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.096 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.096 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.096 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.096 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.096 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.096 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.096 { 00:19:10.096 "cntlid": 59, 00:19:10.096 "qid": 0, 00:19:10.096 "state": "enabled", 00:19:10.096 "thread": "nvmf_tgt_poll_group_000", 00:19:10.096 "listen_address": { 00:19:10.096 "trtype": "TCP", 00:19:10.096 "adrfam": "IPv4", 00:19:10.096 "traddr": "10.0.0.2", 00:19:10.096 "trsvcid": "4420" 00:19:10.096 }, 00:19:10.097 "peer_address": { 00:19:10.097 "trtype": "TCP", 00:19:10.097 "adrfam": "IPv4", 00:19:10.097 "traddr": "10.0.0.1", 00:19:10.097 "trsvcid": "41902" 00:19:10.097 }, 00:19:10.097 "auth": { 00:19:10.097 "state": "completed", 00:19:10.097 "digest": "sha384", 00:19:10.097 "dhgroup": "ffdhe2048" 00:19:10.097 } 00:19:10.097 } 00:19:10.097 ]' 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.097 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.358 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.316 
16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.316 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.577 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.577 { 00:19:11.577 "cntlid": 61, 00:19:11.577 "qid": 0, 00:19:11.577 "state": "enabled", 00:19:11.577 "thread": "nvmf_tgt_poll_group_000", 00:19:11.577 "listen_address": { 00:19:11.577 "trtype": "TCP", 00:19:11.577 "adrfam": "IPv4", 00:19:11.577 "traddr": "10.0.0.2", 00:19:11.577 "trsvcid": "4420" 00:19:11.577 }, 00:19:11.577 "peer_address": { 00:19:11.577 "trtype": "TCP", 00:19:11.577 "adrfam": "IPv4", 00:19:11.577 "traddr": "10.0.0.1", 00:19:11.577 "trsvcid": "41930" 00:19:11.577 }, 00:19:11.577 "auth": { 00:19:11.577 "state": "completed", 00:19:11.577 "digest": "sha384", 00:19:11.577 "dhgroup": "ffdhe2048" 00:19:11.577 } 00:19:11.577 } 00:19:11.577 ]' 00:19:11.577 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.838 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.838 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.838 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.838 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.838 16:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.838 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.838 16:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.100 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.672 16:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.934 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.935 
16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.935 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.196 00:19:13.196 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.196 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.196 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.458 { 00:19:13.458 "cntlid": 63, 00:19:13.458 "qid": 0, 00:19:13.458 "state": "enabled", 00:19:13.458 "thread": "nvmf_tgt_poll_group_000", 00:19:13.458 "listen_address": { 00:19:13.458 "trtype": "TCP", 00:19:13.458 "adrfam": "IPv4", 00:19:13.458 "traddr": "10.0.0.2", 00:19:13.458 "trsvcid": "4420" 00:19:13.458 }, 00:19:13.458 "peer_address": { 00:19:13.458 "trtype": "TCP", 00:19:13.458 "adrfam": "IPv4", 00:19:13.458 "traddr": "10.0.0.1", 00:19:13.458 "trsvcid": "41952" 00:19:13.458 }, 00:19:13.458 "auth": { 00:19:13.458 "state": "completed", 00:19:13.458 "digest": "sha384", 00:19:13.458 "dhgroup": "ffdhe2048" 00:19:13.458 } 00:19:13.458 } 00:19:13.458 ]' 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.458 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:13.718 16:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:14.291 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.291 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.291 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.291 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.552 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.553 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.553 16:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.814 00:19:14.814 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.814 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.814 16:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.075 { 00:19:15.075 "cntlid": 65, 00:19:15.075 "qid": 0, 00:19:15.075 "state": "enabled", 00:19:15.075 "thread": "nvmf_tgt_poll_group_000", 00:19:15.075 "listen_address": { 00:19:15.075 "trtype": "TCP", 00:19:15.075 "adrfam": "IPv4", 00:19:15.075 "traddr": "10.0.0.2", 00:19:15.075 "trsvcid": "4420" 00:19:15.075 }, 00:19:15.075 "peer_address": { 00:19:15.075 "trtype": "TCP", 00:19:15.075 "adrfam": "IPv4", 00:19:15.075 "traddr": "10.0.0.1", 00:19:15.075 "trsvcid": "36720" 00:19:15.075 }, 00:19:15.075 "auth": { 00:19:15.075 "state": "completed", 00:19:15.075 "digest": "sha384", 00:19:15.075 "dhgroup": "ffdhe3072" 00:19:15.075 } 00:19:15.075 } 00:19:15.075 ]' 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.075 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.337 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:15.910 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.172 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.434 00:19:16.434 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.434 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.434 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.695 { 00:19:16.695 "cntlid": 67, 00:19:16.695 "qid": 0, 00:19:16.695 "state": "enabled", 00:19:16.695 "thread": "nvmf_tgt_poll_group_000", 00:19:16.695 "listen_address": { 00:19:16.695 "trtype": "TCP", 00:19:16.695 "adrfam": "IPv4", 00:19:16.695 "traddr": "10.0.0.2", 00:19:16.695 "trsvcid": "4420" 00:19:16.695 }, 00:19:16.695 "peer_address": { 00:19:16.695 "trtype": "TCP", 00:19:16.695 "adrfam": "IPv4", 00:19:16.695 "traddr": "10.0.0.1", 00:19:16.695 "trsvcid": "36742" 00:19:16.695 }, 00:19:16.695 "auth": { 00:19:16.695 "state": "completed", 00:19:16.695 "digest": "sha384", 00:19:16.695 "dhgroup": "ffdhe3072" 00:19:16.695 } 00:19:16.695 } 00:19:16.695 ]' 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.695 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.957 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:17.919 16:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.919 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.919 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.919 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.919 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.919 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.919 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.919 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.919 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.180 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.180 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.181 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.181 { 00:19:18.181 "cntlid": 69, 00:19:18.181 "qid": 0, 00:19:18.181 "state": "enabled", 00:19:18.181 "thread": "nvmf_tgt_poll_group_000", 00:19:18.181 "listen_address": { 00:19:18.181 "trtype": "TCP", 00:19:18.181 "adrfam": "IPv4", 00:19:18.181 "traddr": "10.0.0.2", 00:19:18.181 "trsvcid": "4420" 00:19:18.181 }, 00:19:18.181 "peer_address": { 00:19:18.181 "trtype": "TCP", 00:19:18.181 "adrfam": "IPv4", 00:19:18.181 "traddr": "10.0.0.1", 00:19:18.181 "trsvcid": "36768" 00:19:18.181 }, 00:19:18.181 "auth": { 00:19:18.181 "state": "completed", 00:19:18.181 "digest": "sha384", 00:19:18.181 "dhgroup": "ffdhe3072" 00:19:18.181 } 00:19:18.181 } 00:19:18.181 ]' 00:19:18.181 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.477 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.477 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.477 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.477 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.477 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.477 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.477 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.738 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.310 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.572 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.573 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.573 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.834 00:19:19.834 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.834 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.834 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.095 16:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.095 { 00:19:20.095 "cntlid": 71, 00:19:20.095 "qid": 0, 00:19:20.095 "state": "enabled", 00:19:20.095 "thread": "nvmf_tgt_poll_group_000", 00:19:20.095 "listen_address": { 00:19:20.095 "trtype": "TCP", 00:19:20.095 "adrfam": "IPv4", 00:19:20.095 "traddr": "10.0.0.2", 00:19:20.095 "trsvcid": "4420" 00:19:20.095 }, 00:19:20.095 "peer_address": { 00:19:20.095 "trtype": "TCP", 00:19:20.095 "adrfam": "IPv4", 00:19:20.095 "traddr": "10.0.0.1", 00:19:20.095 "trsvcid": "36802" 00:19:20.095 }, 00:19:20.095 "auth": { 00:19:20.095 "state": "completed", 00:19:20.095 "digest": "sha384", 00:19:20.095 "dhgroup": "ffdhe3072" 00:19:20.095 } 00:19:20.095 } 00:19:20.095 ]' 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.095 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.356 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:20.929 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.929 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.929 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.929 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.190 16:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.190 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.452 00:19:21.452 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.452 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.452 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.714 16:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.714 { 00:19:21.714 "cntlid": 73, 00:19:21.714 "qid": 0, 00:19:21.714 "state": "enabled", 00:19:21.714 "thread": "nvmf_tgt_poll_group_000", 00:19:21.714 "listen_address": { 00:19:21.714 "trtype": "TCP", 00:19:21.714 "adrfam": "IPv4", 00:19:21.714 "traddr": "10.0.0.2", 00:19:21.714 "trsvcid": "4420" 00:19:21.714 }, 00:19:21.714 "peer_address": { 00:19:21.714 "trtype": "TCP", 00:19:21.714 "adrfam": "IPv4", 00:19:21.714 "traddr": "10.0.0.1", 00:19:21.714 "trsvcid": "36850" 00:19:21.714 }, 00:19:21.714 "auth": { 00:19:21.714 "state": "completed", 00:19:21.714 "digest": "sha384", 00:19:21.714 "dhgroup": "ffdhe4096" 00:19:21.714 } 00:19:21.714 } 00:19:21.714 ]' 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.714 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.974 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:22.917 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.917 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.179 00:19:23.179 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.179 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.179 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:19:23.440 { 00:19:23.440 "cntlid": 75, 00:19:23.440 "qid": 0, 00:19:23.440 "state": "enabled", 00:19:23.440 "thread": "nvmf_tgt_poll_group_000", 00:19:23.440 "listen_address": { 00:19:23.440 "trtype": "TCP", 00:19:23.440 "adrfam": "IPv4", 00:19:23.440 "traddr": "10.0.0.2", 00:19:23.440 "trsvcid": "4420" 00:19:23.440 }, 00:19:23.440 "peer_address": { 00:19:23.440 "trtype": "TCP", 00:19:23.440 "adrfam": "IPv4", 00:19:23.440 "traddr": "10.0.0.1", 00:19:23.440 "trsvcid": "36886" 00:19:23.440 }, 00:19:23.440 "auth": { 00:19:23.440 "state": "completed", 00:19:23.440 "digest": "sha384", 00:19:23.440 "dhgroup": "ffdhe4096" 00:19:23.440 } 00:19:23.440 } 00:19:23.440 ]' 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.440 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.699 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:24.270 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.270 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.270 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.270 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.531 
16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.531 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.791 00:19:24.791 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.791 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.791 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.052 { 00:19:25.052 "cntlid": 77, 00:19:25.052 "qid": 0, 00:19:25.052 "state": "enabled", 00:19:25.052 "thread": "nvmf_tgt_poll_group_000", 00:19:25.052 "listen_address": { 00:19:25.052 "trtype": "TCP", 00:19:25.052 "adrfam": "IPv4", 00:19:25.052 "traddr": "10.0.0.2", 00:19:25.052 "trsvcid": "4420" 00:19:25.052 }, 00:19:25.052 "peer_address": { 
00:19:25.052 "trtype": "TCP", 00:19:25.052 "adrfam": "IPv4", 00:19:25.052 "traddr": "10.0.0.1", 00:19:25.052 "trsvcid": "33666" 00:19:25.052 }, 00:19:25.052 "auth": { 00:19:25.052 "state": "completed", 00:19:25.052 "digest": "sha384", 00:19:25.052 "dhgroup": "ffdhe4096" 00:19:25.052 } 00:19:25.052 } 00:19:25.052 ]' 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.052 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.313 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:19:26.256 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.257 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.518 00:19:26.518 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.518 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.518 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.779 { 00:19:26.779 "cntlid": 79, 00:19:26.779 "qid": 0, 00:19:26.779 "state": "enabled", 00:19:26.779 "thread": "nvmf_tgt_poll_group_000", 00:19:26.779 "listen_address": { 00:19:26.779 "trtype": "TCP", 00:19:26.779 "adrfam": "IPv4", 00:19:26.779 "traddr": "10.0.0.2", 00:19:26.779 "trsvcid": "4420" 00:19:26.779 }, 00:19:26.779 "peer_address": { 00:19:26.779 "trtype": "TCP", 00:19:26.779 "adrfam": "IPv4", 00:19:26.779 "traddr": "10.0.0.1", 00:19:26.779 "trsvcid": "33700" 00:19:26.779 }, 00:19:26.779 "auth": { 00:19:26.779 "state": "completed", 00:19:26.779 "digest": "sha384", 00:19:26.779 "dhgroup": "ffdhe4096" 00:19:26.779 } 00:19:26.779 } 00:19:26.779 ]' 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.779 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.040 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.612 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
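Each round is then closed out with the kernel initiator: the same subsystem is connected with nvme-cli using the generated DHHC-1 secrets, disconnected, and the host entry is removed so the next key/dhgroup combination starts clean. A minimal sketch of that re-check, assuming an nvme-cli build with DH-HMAC-CHAP support; KEY and CTRL_KEY are placeholders for the "DHHC-1:xx:...:" secret strings printed in the trace, and --dhchap-ctrl-secret is only passed for key indexes that have a controller key.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  # Connect through the kernel NVMe/TCP initiator with the DH-HMAC-CHAP secrets.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CTRL_KEY"
  # Tear the session down again once the connect succeeded.
  nvme disconnect -n "$subnqn"
  # The trace then removes the host from the subsystem before the next round:
  #   rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The trace continues below with the next round (sha384 with the following key index), repeating the add_host / attach / verify / detach pattern shown above.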
00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.873 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.134 00:19:28.134 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.134 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.134 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.395 { 00:19:28.395 "cntlid": 81, 00:19:28.395 "qid": 0, 00:19:28.395 "state": "enabled", 00:19:28.395 "thread": "nvmf_tgt_poll_group_000", 00:19:28.395 "listen_address": { 00:19:28.395 "trtype": "TCP", 00:19:28.395 "adrfam": "IPv4", 00:19:28.395 "traddr": "10.0.0.2", 00:19:28.395 "trsvcid": "4420" 00:19:28.395 }, 00:19:28.395 "peer_address": { 00:19:28.395 "trtype": "TCP", 00:19:28.395 "adrfam": "IPv4", 00:19:28.395 "traddr": "10.0.0.1", 00:19:28.395 "trsvcid": "33718" 00:19:28.395 }, 00:19:28.395 "auth": { 00:19:28.395 "state": "completed", 00:19:28.395 "digest": "sha384", 00:19:28.395 "dhgroup": "ffdhe6144" 00:19:28.395 } 00:19:28.395 } 00:19:28.395 ]' 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.395 16:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.395 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.656 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:29.228 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.489 16:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.489 16:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.750 00:19:29.750 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.750 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.750 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.010 { 00:19:30.010 "cntlid": 83, 00:19:30.010 "qid": 0, 00:19:30.010 "state": "enabled", 00:19:30.010 "thread": "nvmf_tgt_poll_group_000", 00:19:30.010 "listen_address": { 00:19:30.010 "trtype": "TCP", 00:19:30.010 "adrfam": "IPv4", 00:19:30.010 "traddr": "10.0.0.2", 00:19:30.010 "trsvcid": "4420" 00:19:30.010 }, 00:19:30.010 "peer_address": { 00:19:30.010 "trtype": "TCP", 00:19:30.010 "adrfam": "IPv4", 00:19:30.010 "traddr": "10.0.0.1", 00:19:30.010 "trsvcid": "33744" 00:19:30.010 }, 00:19:30.010 "auth": { 00:19:30.010 "state": "completed", 00:19:30.010 "digest": "sha384", 00:19:30.010 "dhgroup": "ffdhe6144" 00:19:30.010 } 00:19:30.010 } 00:19:30.010 ]' 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.010 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.271 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.271 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.271 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.271 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.214 16:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.214 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.475 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.737 { 00:19:31.737 "cntlid": 85, 00:19:31.737 "qid": 0, 00:19:31.737 "state": "enabled", 00:19:31.737 "thread": "nvmf_tgt_poll_group_000", 00:19:31.737 "listen_address": { 00:19:31.737 "trtype": "TCP", 00:19:31.737 "adrfam": "IPv4", 00:19:31.737 "traddr": "10.0.0.2", 00:19:31.737 "trsvcid": "4420" 00:19:31.737 }, 00:19:31.737 "peer_address": { 00:19:31.737 "trtype": "TCP", 00:19:31.737 "adrfam": "IPv4", 00:19:31.737 "traddr": "10.0.0.1", 00:19:31.737 "trsvcid": "33784" 00:19:31.737 }, 00:19:31.737 "auth": { 00:19:31.737 "state": "completed", 00:19:31.737 "digest": "sha384", 00:19:31.737 "dhgroup": "ffdhe6144" 00:19:31.737 } 00:19:31.737 } 00:19:31.737 ]' 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.737 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.998 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.998 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.998 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.998 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.998 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.998 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:32.945 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.945 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.945 16:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.276 00:19:33.276 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.276 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.276 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.548 { 00:19:33.548 "cntlid": 87, 00:19:33.548 "qid": 0, 00:19:33.548 "state": "enabled", 00:19:33.548 "thread": "nvmf_tgt_poll_group_000", 00:19:33.548 "listen_address": { 00:19:33.548 "trtype": "TCP", 00:19:33.548 "adrfam": "IPv4", 00:19:33.548 "traddr": "10.0.0.2", 00:19:33.548 "trsvcid": "4420" 00:19:33.548 }, 00:19:33.548 "peer_address": { 00:19:33.548 "trtype": "TCP", 00:19:33.548 "adrfam": "IPv4", 00:19:33.548 "traddr": "10.0.0.1", 00:19:33.548 "trsvcid": "33812" 00:19:33.548 }, 00:19:33.548 "auth": { 00:19:33.548 "state": "completed", 00:19:33.548 "digest": "sha384", 00:19:33.548 "dhgroup": "ffdhe6144" 00:19:33.548 } 00:19:33.548 } 00:19:33.548 ]' 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.548 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.810 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.810 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.810 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.810 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.755 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.327 00:19:35.327 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.327 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.327 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.589 { 00:19:35.589 "cntlid": 89, 00:19:35.589 "qid": 0, 00:19:35.589 "state": "enabled", 00:19:35.589 "thread": "nvmf_tgt_poll_group_000", 00:19:35.589 "listen_address": { 00:19:35.589 "trtype": "TCP", 00:19:35.589 "adrfam": "IPv4", 00:19:35.589 "traddr": "10.0.0.2", 00:19:35.589 "trsvcid": "4420" 00:19:35.589 }, 00:19:35.589 "peer_address": { 00:19:35.589 "trtype": "TCP", 00:19:35.589 "adrfam": "IPv4", 00:19:35.589 "traddr": "10.0.0.1", 00:19:35.589 "trsvcid": "43528" 00:19:35.589 }, 00:19:35.589 "auth": { 00:19:35.589 "state": "completed", 00:19:35.589 "digest": "sha384", 00:19:35.589 "dhgroup": "ffdhe8192" 00:19:35.589 } 00:19:35.589 } 00:19:35.589 ]' 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.589 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.851 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:36.424 16:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.424 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.424 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.424 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.424 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.424 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.424 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:36.424 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.686 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.259 00:19:37.259 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.259 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.259 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.520 { 00:19:37.520 "cntlid": 91, 00:19:37.520 "qid": 0, 00:19:37.520 "state": "enabled", 00:19:37.520 "thread": "nvmf_tgt_poll_group_000", 00:19:37.520 "listen_address": { 00:19:37.520 "trtype": "TCP", 00:19:37.520 "adrfam": "IPv4", 00:19:37.520 "traddr": "10.0.0.2", 00:19:37.520 "trsvcid": "4420" 00:19:37.520 }, 00:19:37.520 "peer_address": { 00:19:37.520 "trtype": "TCP", 00:19:37.520 "adrfam": "IPv4", 00:19:37.520 "traddr": "10.0.0.1", 00:19:37.520 "trsvcid": "43558" 00:19:37.520 }, 00:19:37.520 "auth": { 00:19:37.520 "state": "completed", 00:19:37.520 "digest": "sha384", 00:19:37.520 "dhgroup": "ffdhe8192" 00:19:37.520 } 00:19:37.520 } 00:19:37.520 ]' 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.520 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.782 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:38.353 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.615 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.188 00:19:39.188 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.188 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.188 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.449 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:39.449 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.449 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.449 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.449 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.449 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.449 { 00:19:39.449 "cntlid": 93, 00:19:39.449 "qid": 0, 00:19:39.449 "state": "enabled", 00:19:39.449 "thread": "nvmf_tgt_poll_group_000", 00:19:39.449 "listen_address": { 00:19:39.449 "trtype": "TCP", 00:19:39.449 "adrfam": "IPv4", 00:19:39.449 "traddr": "10.0.0.2", 00:19:39.449 "trsvcid": "4420" 00:19:39.449 }, 00:19:39.449 "peer_address": { 00:19:39.449 "trtype": "TCP", 00:19:39.449 "adrfam": "IPv4", 00:19:39.449 "traddr": "10.0.0.1", 00:19:39.449 "trsvcid": "43596" 00:19:39.450 }, 00:19:39.450 "auth": { 00:19:39.450 "state": "completed", 00:19:39.450 "digest": "sha384", 00:19:39.450 "dhgroup": "ffdhe8192" 00:19:39.450 } 00:19:39.450 } 00:19:39.450 ]' 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.450 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.711 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:40.283 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.283 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.283 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.283 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.283 16:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.283 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.283 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.283 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.544 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.117 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.117 { 00:19:41.117 "cntlid": 95, 00:19:41.117 "qid": 0, 00:19:41.117 "state": "enabled", 00:19:41.117 "thread": "nvmf_tgt_poll_group_000", 00:19:41.117 "listen_address": { 00:19:41.117 "trtype": "TCP", 00:19:41.117 "adrfam": "IPv4", 00:19:41.117 "traddr": "10.0.0.2", 00:19:41.117 "trsvcid": "4420" 00:19:41.117 }, 00:19:41.117 "peer_address": { 00:19:41.117 "trtype": "TCP", 00:19:41.117 "adrfam": "IPv4", 00:19:41.117 "traddr": "10.0.0.1", 00:19:41.117 "trsvcid": "43620" 00:19:41.117 }, 00:19:41.117 "auth": { 00:19:41.117 "state": "completed", 00:19:41.117 "digest": "sha384", 00:19:41.117 "dhgroup": "ffdhe8192" 00:19:41.117 } 00:19:41.117 } 00:19:41.117 ]' 00:19:41.117 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.378 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.378 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.378 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.378 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.378 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.378 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.378 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.639 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.212 16:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.212 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.473 00:19:42.473 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.473 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.473 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.735 16:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.735 { 00:19:42.735 "cntlid": 97, 00:19:42.735 "qid": 0, 00:19:42.735 "state": "enabled", 00:19:42.735 "thread": "nvmf_tgt_poll_group_000", 00:19:42.735 "listen_address": { 00:19:42.735 "trtype": "TCP", 00:19:42.735 "adrfam": "IPv4", 00:19:42.735 "traddr": "10.0.0.2", 00:19:42.735 "trsvcid": "4420" 00:19:42.735 }, 00:19:42.735 "peer_address": { 00:19:42.735 "trtype": "TCP", 00:19:42.735 "adrfam": "IPv4", 00:19:42.735 "traddr": "10.0.0.1", 00:19:42.735 "trsvcid": "43642" 00:19:42.735 }, 00:19:42.735 "auth": { 00:19:42.735 "state": "completed", 00:19:42.735 "digest": "sha512", 00:19:42.735 "dhgroup": "null" 00:19:42.735 } 00:19:42.735 } 00:19:42.735 ]' 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:42.735 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.997 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.997 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.997 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.997 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:43.942 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.942 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.203 00:19:44.203 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.203 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.203 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.203 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.203 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.203 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.204 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.204 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.204 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.204 { 00:19:44.204 "cntlid": 99, 00:19:44.204 "qid": 0, 00:19:44.204 "state": "enabled", 00:19:44.204 "thread": "nvmf_tgt_poll_group_000", 00:19:44.204 "listen_address": { 00:19:44.204 "trtype": "TCP", 00:19:44.204 "adrfam": "IPv4", 00:19:44.204 
"traddr": "10.0.0.2", 00:19:44.204 "trsvcid": "4420" 00:19:44.204 }, 00:19:44.204 "peer_address": { 00:19:44.204 "trtype": "TCP", 00:19:44.204 "adrfam": "IPv4", 00:19:44.204 "traddr": "10.0.0.1", 00:19:44.204 "trsvcid": "35018" 00:19:44.204 }, 00:19:44.204 "auth": { 00:19:44.204 "state": "completed", 00:19:44.204 "digest": "sha512", 00:19:44.204 "dhgroup": "null" 00:19:44.204 } 00:19:44.204 } 00:19:44.204 ]' 00:19:44.204 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.464 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.464 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.464 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:44.464 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.464 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.464 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.464 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.725 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:45.296 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.297 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.297 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.297 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.297 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.297 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.297 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:45.297 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.557 16:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.557 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.819 00:19:45.819 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.819 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.819 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.079 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.079 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.079 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.079 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.079 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.079 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.079 { 00:19:46.079 "cntlid": 101, 00:19:46.079 "qid": 0, 00:19:46.079 "state": "enabled", 00:19:46.079 "thread": "nvmf_tgt_poll_group_000", 00:19:46.079 "listen_address": { 00:19:46.079 "trtype": "TCP", 00:19:46.079 "adrfam": "IPv4", 00:19:46.079 "traddr": "10.0.0.2", 00:19:46.079 "trsvcid": "4420" 00:19:46.079 }, 00:19:46.079 "peer_address": { 00:19:46.079 "trtype": "TCP", 00:19:46.080 "adrfam": "IPv4", 00:19:46.080 "traddr": "10.0.0.1", 00:19:46.080 "trsvcid": "35046" 00:19:46.080 }, 00:19:46.080 "auth": { 00:19:46.080 "state": "completed", 00:19:46.080 "digest": "sha512", 00:19:46.080 "dhgroup": "null" 
00:19:46.080 } 00:19:46.080 } 00:19:46.080 ]' 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.080 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.340 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:46.913 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.174 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.436 00:19:47.436 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.436 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.436 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.697 { 00:19:47.697 "cntlid": 103, 00:19:47.697 "qid": 0, 00:19:47.697 "state": "enabled", 00:19:47.697 "thread": "nvmf_tgt_poll_group_000", 00:19:47.697 "listen_address": { 00:19:47.697 "trtype": "TCP", 00:19:47.697 "adrfam": "IPv4", 00:19:47.697 "traddr": "10.0.0.2", 00:19:47.697 "trsvcid": "4420" 00:19:47.697 }, 00:19:47.697 "peer_address": { 00:19:47.697 "trtype": "TCP", 00:19:47.697 "adrfam": "IPv4", 00:19:47.697 "traddr": "10.0.0.1", 00:19:47.697 "trsvcid": "35074" 00:19:47.697 }, 00:19:47.697 "auth": { 00:19:47.697 "state": "completed", 00:19:47.697 "digest": "sha512", 00:19:47.697 "dhgroup": "null" 00:19:47.697 } 00:19:47.697 } 00:19:47.697 ]' 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.697 16:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.697 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.988 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:48.561 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.823 16:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.823 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.085 00:19:49.085 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.085 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.085 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.346 { 00:19:49.346 "cntlid": 105, 00:19:49.346 "qid": 0, 00:19:49.346 "state": "enabled", 00:19:49.346 "thread": "nvmf_tgt_poll_group_000", 00:19:49.346 "listen_address": { 00:19:49.346 "trtype": "TCP", 00:19:49.346 "adrfam": "IPv4", 00:19:49.346 "traddr": "10.0.0.2", 00:19:49.346 "trsvcid": "4420" 00:19:49.346 }, 00:19:49.346 "peer_address": { 00:19:49.346 "trtype": "TCP", 00:19:49.346 "adrfam": "IPv4", 00:19:49.346 "traddr": "10.0.0.1", 00:19:49.346 "trsvcid": "35092" 00:19:49.346 }, 00:19:49.346 "auth": { 00:19:49.346 "state": "completed", 00:19:49.346 "digest": "sha512", 00:19:49.346 "dhgroup": "ffdhe2048" 00:19:49.346 } 00:19:49.346 } 00:19:49.346 ]' 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.346 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.607 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.552 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.813 00:19:50.813 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.813 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.813 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.813 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.813 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.813 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.813 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.813 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.813 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.813 { 00:19:50.813 "cntlid": 107, 00:19:50.813 "qid": 0, 00:19:50.813 "state": "enabled", 00:19:50.813 "thread": "nvmf_tgt_poll_group_000", 00:19:50.813 "listen_address": { 00:19:50.813 "trtype": "TCP", 00:19:50.813 "adrfam": "IPv4", 00:19:50.813 "traddr": "10.0.0.2", 00:19:50.813 "trsvcid": "4420" 00:19:50.813 }, 00:19:50.813 "peer_address": { 00:19:50.813 "trtype": "TCP", 00:19:50.813 "adrfam": "IPv4", 00:19:50.813 "traddr": "10.0.0.1", 00:19:50.813 "trsvcid": "35118" 00:19:50.813 }, 00:19:50.813 "auth": { 00:19:50.813 "state": "completed", 00:19:50.813 "digest": "sha512", 00:19:50.813 "dhgroup": "ffdhe2048" 00:19:50.813 } 00:19:50.813 } 00:19:50.813 ]' 00:19:50.813 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.073 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.073 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.073 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.073 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.073 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.073 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.073 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.333 16:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.906 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.167 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:52.168 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.429 00:19:52.429 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.429 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.429 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.429 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.429 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.429 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.429 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.691 { 00:19:52.691 "cntlid": 109, 00:19:52.691 "qid": 0, 00:19:52.691 "state": "enabled", 00:19:52.691 "thread": "nvmf_tgt_poll_group_000", 00:19:52.691 "listen_address": { 00:19:52.691 "trtype": "TCP", 00:19:52.691 "adrfam": "IPv4", 00:19:52.691 "traddr": "10.0.0.2", 00:19:52.691 "trsvcid": "4420" 00:19:52.691 }, 00:19:52.691 "peer_address": { 00:19:52.691 "trtype": "TCP", 00:19:52.691 "adrfam": "IPv4", 00:19:52.691 "traddr": "10.0.0.1", 00:19:52.691 "trsvcid": "35158" 00:19:52.691 }, 00:19:52.691 "auth": { 00:19:52.691 "state": "completed", 00:19:52.691 "digest": "sha512", 00:19:52.691 "dhgroup": "ffdhe2048" 00:19:52.691 } 00:19:52.691 } 00:19:52.691 ]' 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.691 16:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.952 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.524 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.785 16:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.046 00:19:54.046 16:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.046 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.046 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.308 { 00:19:54.308 "cntlid": 111, 00:19:54.308 "qid": 0, 00:19:54.308 "state": "enabled", 00:19:54.308 "thread": "nvmf_tgt_poll_group_000", 00:19:54.308 "listen_address": { 00:19:54.308 "trtype": "TCP", 00:19:54.308 "adrfam": "IPv4", 00:19:54.308 "traddr": "10.0.0.2", 00:19:54.308 "trsvcid": "4420" 00:19:54.308 }, 00:19:54.308 "peer_address": { 00:19:54.308 "trtype": "TCP", 00:19:54.308 "adrfam": "IPv4", 00:19:54.308 "traddr": "10.0.0.1", 00:19:54.308 "trsvcid": "33170" 00:19:54.308 }, 00:19:54.308 "auth": { 00:19:54.308 "state": "completed", 00:19:54.308 "digest": "sha512", 00:19:54.308 "dhgroup": "ffdhe2048" 00:19:54.308 } 00:19:54.308 } 00:19:54.308 ]' 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.308 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.569 16:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.141 16:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:55.141 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.402 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.663 00:19:55.663 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.663 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.663 16:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.663 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.663 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.663 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.663 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.924 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.924 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.924 { 00:19:55.924 "cntlid": 113, 00:19:55.924 "qid": 0, 00:19:55.924 "state": "enabled", 00:19:55.924 "thread": "nvmf_tgt_poll_group_000", 00:19:55.924 "listen_address": { 00:19:55.924 "trtype": "TCP", 00:19:55.924 "adrfam": "IPv4", 00:19:55.924 "traddr": "10.0.0.2", 00:19:55.924 "trsvcid": "4420" 00:19:55.924 }, 00:19:55.924 "peer_address": { 00:19:55.924 "trtype": "TCP", 00:19:55.924 "adrfam": "IPv4", 00:19:55.924 "traddr": "10.0.0.1", 00:19:55.924 "trsvcid": "33190" 00:19:55.924 }, 00:19:55.924 "auth": { 00:19:55.924 "state": "completed", 00:19:55.924 "digest": "sha512", 00:19:55.924 "dhgroup": "ffdhe3072" 00:19:55.924 } 00:19:55.924 } 00:19:55.924 ]' 00:19:55.924 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.924 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.924 16:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.924 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.924 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.924 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.924 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.924 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.924 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:56.868 16:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.868 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.129 00:19:57.130 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.130 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.130 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.391 { 00:19:57.391 "cntlid": 115, 00:19:57.391 "qid": 0, 00:19:57.391 "state": "enabled", 00:19:57.391 "thread": "nvmf_tgt_poll_group_000", 00:19:57.391 "listen_address": { 00:19:57.391 "trtype": "TCP", 00:19:57.391 "adrfam": "IPv4", 00:19:57.391 "traddr": "10.0.0.2", 00:19:57.391 "trsvcid": "4420" 00:19:57.391 }, 00:19:57.391 "peer_address": { 00:19:57.391 "trtype": "TCP", 00:19:57.391 "adrfam": "IPv4", 00:19:57.391 "traddr": "10.0.0.1", 00:19:57.391 "trsvcid": "33220" 00:19:57.391 }, 00:19:57.391 "auth": { 00:19:57.391 "state": "completed", 00:19:57.391 "digest": "sha512", 00:19:57.391 "dhgroup": "ffdhe3072" 00:19:57.391 } 00:19:57.391 } 00:19:57.391 ]' 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.391 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.651 16:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.595 16:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.595 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.857 00:19:58.857 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.857 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.857 16:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.857 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.857 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.857 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.857 16:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.857 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.857 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.857 { 00:19:58.857 "cntlid": 117, 00:19:58.857 "qid": 0, 00:19:58.857 "state": "enabled", 00:19:58.857 "thread": "nvmf_tgt_poll_group_000", 00:19:58.857 "listen_address": { 00:19:58.857 "trtype": "TCP", 00:19:58.857 "adrfam": "IPv4", 00:19:58.857 "traddr": "10.0.0.2", 00:19:58.857 "trsvcid": "4420" 00:19:58.857 }, 00:19:58.857 "peer_address": { 00:19:58.857 "trtype": "TCP", 00:19:58.857 "adrfam": "IPv4", 00:19:58.857 "traddr": "10.0.0.1", 00:19:58.857 "trsvcid": "33244" 00:19:58.857 }, 00:19:58.857 "auth": { 00:19:58.857 "state": "completed", 00:19:58.857 "digest": "sha512", 00:19:58.857 "dhgroup": "ffdhe3072" 00:19:58.857 } 00:19:58.857 } 00:19:58.857 ]' 00:19:59.118 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.118 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.118 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.118 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.119 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.119 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.119 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.119 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.380 16:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:59.952 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.214 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.474 00:20:00.474 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.474 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.474 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.736 { 00:20:00.736 "cntlid": 119, 00:20:00.736 "qid": 0, 00:20:00.736 "state": "enabled", 00:20:00.736 "thread": 
"nvmf_tgt_poll_group_000", 00:20:00.736 "listen_address": { 00:20:00.736 "trtype": "TCP", 00:20:00.736 "adrfam": "IPv4", 00:20:00.736 "traddr": "10.0.0.2", 00:20:00.736 "trsvcid": "4420" 00:20:00.736 }, 00:20:00.736 "peer_address": { 00:20:00.736 "trtype": "TCP", 00:20:00.736 "adrfam": "IPv4", 00:20:00.736 "traddr": "10.0.0.1", 00:20:00.736 "trsvcid": "33280" 00:20:00.736 }, 00:20:00.736 "auth": { 00:20:00.736 "state": "completed", 00:20:00.736 "digest": "sha512", 00:20:00.736 "dhgroup": "ffdhe3072" 00:20:00.736 } 00:20:00.736 } 00:20:00.736 ]' 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.736 16:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.998 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:20:01.570 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.570 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.570 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.570 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.832 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.094 00:20:02.094 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.094 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.094 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.359 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.359 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.359 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.359 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.359 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.359 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.360 { 00:20:02.360 "cntlid": 121, 00:20:02.360 "qid": 0, 00:20:02.360 "state": "enabled", 00:20:02.360 "thread": "nvmf_tgt_poll_group_000", 00:20:02.360 "listen_address": { 00:20:02.360 "trtype": "TCP", 00:20:02.360 "adrfam": "IPv4", 00:20:02.360 "traddr": "10.0.0.2", 00:20:02.360 "trsvcid": "4420" 00:20:02.360 }, 00:20:02.360 "peer_address": { 00:20:02.360 "trtype": "TCP", 00:20:02.360 "adrfam": 
"IPv4", 00:20:02.360 "traddr": "10.0.0.1", 00:20:02.360 "trsvcid": "33312" 00:20:02.360 }, 00:20:02.360 "auth": { 00:20:02.360 "state": "completed", 00:20:02.360 "digest": "sha512", 00:20:02.360 "dhgroup": "ffdhe4096" 00:20:02.360 } 00:20:02.360 } 00:20:02.360 ]' 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.360 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.690 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.274 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.540 
16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.540 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.800 00:20:03.800 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.801 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.801 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.801 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.801 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.801 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.801 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.062 { 00:20:04.062 "cntlid": 123, 00:20:04.062 "qid": 0, 00:20:04.062 "state": "enabled", 00:20:04.062 "thread": "nvmf_tgt_poll_group_000", 00:20:04.062 "listen_address": { 00:20:04.062 "trtype": "TCP", 00:20:04.062 "adrfam": "IPv4", 00:20:04.062 "traddr": "10.0.0.2", 00:20:04.062 "trsvcid": "4420" 00:20:04.062 }, 00:20:04.062 "peer_address": { 00:20:04.062 "trtype": "TCP", 00:20:04.062 "adrfam": "IPv4", 00:20:04.062 "traddr": "10.0.0.1", 00:20:04.062 "trsvcid": "38562" 00:20:04.062 }, 00:20:04.062 "auth": { 00:20:04.062 "state": "completed", 00:20:04.062 "digest": "sha512", 00:20:04.062 "dhgroup": "ffdhe4096" 00:20:04.062 } 00:20:04.062 } 00:20:04.062 ]' 00:20:04.062 16:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.062 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.324 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.897 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.897 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.159 00:20:05.159 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.159 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.159 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.421 { 00:20:05.421 "cntlid": 125, 00:20:05.421 "qid": 0, 00:20:05.421 "state": "enabled", 00:20:05.421 "thread": "nvmf_tgt_poll_group_000", 00:20:05.421 "listen_address": { 00:20:05.421 "trtype": "TCP", 00:20:05.421 "adrfam": "IPv4", 00:20:05.421 "traddr": "10.0.0.2", 00:20:05.421 "trsvcid": "4420" 00:20:05.421 }, 00:20:05.421 "peer_address": { 00:20:05.421 "trtype": "TCP", 00:20:05.421 "adrfam": "IPv4", 00:20:05.421 "traddr": "10.0.0.1", 00:20:05.421 "trsvcid": "38594" 00:20:05.421 }, 00:20:05.421 "auth": { 00:20:05.421 "state": "completed", 00:20:05.421 "digest": "sha512", 00:20:05.421 "dhgroup": "ffdhe4096" 00:20:05.421 } 00:20:05.421 } 00:20:05.421 ]' 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.421 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.682 
16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.682 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.682 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.682 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.682 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.682 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:20:06.626 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.626 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.626 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.626 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.626 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.626 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.627 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.888 00:20:06.888 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.888 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.888 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.150 { 00:20:07.150 "cntlid": 127, 00:20:07.150 "qid": 0, 00:20:07.150 "state": "enabled", 00:20:07.150 "thread": "nvmf_tgt_poll_group_000", 00:20:07.150 "listen_address": { 00:20:07.150 "trtype": "TCP", 00:20:07.150 "adrfam": "IPv4", 00:20:07.150 "traddr": "10.0.0.2", 00:20:07.150 "trsvcid": "4420" 00:20:07.150 }, 00:20:07.150 "peer_address": { 00:20:07.150 "trtype": "TCP", 00:20:07.150 "adrfam": "IPv4", 00:20:07.150 "traddr": "10.0.0.1", 00:20:07.150 "trsvcid": "38622" 00:20:07.150 }, 00:20:07.150 "auth": { 00:20:07.150 "state": "completed", 00:20:07.150 "digest": "sha512", 00:20:07.150 "dhgroup": "ffdhe4096" 00:20:07.150 } 00:20:07.150 } 00:20:07.150 ]' 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.150 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.411 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.356 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.617 00:20:08.617 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.617 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.617 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.879 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.879 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.879 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.879 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.879 { 00:20:08.879 "cntlid": 129, 00:20:08.879 "qid": 0, 00:20:08.879 "state": "enabled", 00:20:08.879 "thread": "nvmf_tgt_poll_group_000", 00:20:08.879 "listen_address": { 00:20:08.879 "trtype": "TCP", 00:20:08.879 "adrfam": "IPv4", 00:20:08.879 "traddr": "10.0.0.2", 00:20:08.879 "trsvcid": "4420" 00:20:08.879 }, 00:20:08.879 "peer_address": { 00:20:08.879 "trtype": "TCP", 00:20:08.879 "adrfam": "IPv4", 00:20:08.879 "traddr": "10.0.0.1", 00:20:08.879 "trsvcid": "38642" 00:20:08.879 }, 00:20:08.879 "auth": { 00:20:08.879 "state": "completed", 00:20:08.879 "digest": "sha512", 00:20:08.879 "dhgroup": "ffdhe6144" 00:20:08.879 } 00:20:08.879 } 00:20:08.879 ]' 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.879 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.140 
16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.085 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.347 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.607 { 00:20:10.607 "cntlid": 131, 00:20:10.607 "qid": 0, 00:20:10.607 "state": "enabled", 00:20:10.607 "thread": "nvmf_tgt_poll_group_000", 00:20:10.607 "listen_address": { 00:20:10.607 "trtype": "TCP", 00:20:10.607 "adrfam": "IPv4", 00:20:10.607 "traddr": "10.0.0.2", 00:20:10.607 "trsvcid": "4420" 00:20:10.607 }, 00:20:10.607 "peer_address": { 00:20:10.607 "trtype": "TCP", 00:20:10.607 "adrfam": "IPv4", 00:20:10.607 "traddr": "10.0.0.1", 00:20:10.607 "trsvcid": "38670" 00:20:10.607 }, 00:20:10.607 "auth": { 00:20:10.607 "state": "completed", 00:20:10.607 "digest": "sha512", 00:20:10.607 "dhgroup": "ffdhe6144" 00:20:10.607 } 00:20:10.607 } 00:20:10.607 ]' 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.607 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.869 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.869 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.869 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.869 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.869 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.869 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.812 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.812 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.386 
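The pass above moves on to sha512 with ffdhe6144 and key index 2. Stripped of the xtrace prefixes, one connect_authenticate pass reduces to roughly the RPC sequence sketched below. The rpc/hostnqn/subnqn shorthands are only for readability and hold the values visible in the trace; the key names key2/ckey2 are assumed to have been registered with the target earlier in the test (that setup is not part of this excerpt), and the rpc_cmd calls hide their exact invocation behind xtrace_disable, so the target-side calls below are shown against the default RPC socket.

  # Minimal sketch of one connect_authenticate pass as traced above (sha512 / ffdhe6144 / key index 2).
  # Assumes DH-CHAP keys key2/ckey2 are already registered with the target (done earlier in the test, not shown here).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host-side bdev_nvme options go through the host RPC socket (hostrpc in the trace)
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target-side calls (rpc_cmd in the trace): allow the host with a key pair, then remove it again at the end
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The kernel-initiator leg that follows in the trace repeats the same check through nvme connect / nvme disconnect, passing the equivalent secrets on the command line in DHHC-1:xx: form via --dhchap-secret and --dhchap-ctrl-secret.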
00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.386 { 00:20:12.386 "cntlid": 133, 00:20:12.386 "qid": 0, 00:20:12.386 "state": "enabled", 00:20:12.386 "thread": "nvmf_tgt_poll_group_000", 00:20:12.386 "listen_address": { 00:20:12.386 "trtype": "TCP", 00:20:12.386 "adrfam": "IPv4", 00:20:12.386 "traddr": "10.0.0.2", 00:20:12.386 "trsvcid": "4420" 00:20:12.386 }, 00:20:12.386 "peer_address": { 00:20:12.386 "trtype": "TCP", 00:20:12.386 "adrfam": "IPv4", 00:20:12.386 "traddr": "10.0.0.1", 00:20:12.386 "trsvcid": "38706" 00:20:12.386 }, 00:20:12.386 "auth": { 00:20:12.386 "state": "completed", 00:20:12.386 "digest": "sha512", 00:20:12.386 "dhgroup": "ffdhe6144" 00:20:12.386 } 00:20:12.386 } 00:20:12.386 ]' 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.386 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.648 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.648 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.648 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.648 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.603 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.603 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.864 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.126 { 00:20:14.126 "cntlid": 135, 00:20:14.126 "qid": 0, 00:20:14.126 "state": "enabled", 00:20:14.126 "thread": "nvmf_tgt_poll_group_000", 00:20:14.126 "listen_address": { 00:20:14.126 "trtype": "TCP", 00:20:14.126 "adrfam": "IPv4", 00:20:14.126 "traddr": "10.0.0.2", 00:20:14.126 "trsvcid": "4420" 00:20:14.126 }, 00:20:14.126 "peer_address": { 00:20:14.126 "trtype": "TCP", 00:20:14.126 "adrfam": "IPv4", 00:20:14.126 "traddr": "10.0.0.1", 00:20:14.126 "trsvcid": "38280" 00:20:14.126 }, 00:20:14.126 "auth": { 00:20:14.126 "state": "completed", 00:20:14.126 "digest": "sha512", 00:20:14.126 "dhgroup": "ffdhe6144" 00:20:14.126 } 00:20:14.126 } 00:20:14.126 ]' 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.126 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.388 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.388 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.388 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.388 16:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:20:15.332 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.332 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.333 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.905 00:20:15.905 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.905 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.905 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
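The nvmf_subsystem_get_qpairs output that follows is what the test inspects to confirm the DH-CHAP exchange actually completed with the parameters just configured (sha512 over ffdhe8192 in this pass). The jq checks in the trace reduce to lookups along these lines; capturing the output in qpairs.json is only illustrative, as the test pipes the RPC output straight into jq:

  # illustrative verification, assuming the get_qpairs JSON was saved to qpairs.json
  jq -r '.[0].auth.state'   qpairs.json   # expected: completed (handshake finished on this qpair)
  jq -r '.[0].auth.digest'  qpairs.json   # expected: sha512
  jq -r '.[0].auth.dhgroup' qpairs.json   # expected: ffdhe8192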
00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.166 { 00:20:16.166 "cntlid": 137, 00:20:16.166 "qid": 0, 00:20:16.166 "state": "enabled", 00:20:16.166 "thread": "nvmf_tgt_poll_group_000", 00:20:16.166 "listen_address": { 00:20:16.166 "trtype": "TCP", 00:20:16.166 "adrfam": "IPv4", 00:20:16.166 "traddr": "10.0.0.2", 00:20:16.166 "trsvcid": "4420" 00:20:16.166 }, 00:20:16.166 "peer_address": { 00:20:16.166 "trtype": "TCP", 00:20:16.166 "adrfam": "IPv4", 00:20:16.166 "traddr": "10.0.0.1", 00:20:16.166 "trsvcid": "38304" 00:20:16.166 }, 00:20:16.166 "auth": { 00:20:16.166 "state": "completed", 00:20:16.166 "digest": "sha512", 00:20:16.166 "dhgroup": "ffdhe8192" 00:20:16.166 } 00:20:16.166 } 00:20:16.166 ]' 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.166 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.428 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:20:17.002 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.263 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.855 00:20:17.855 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.855 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.855 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.156 { 00:20:18.156 "cntlid": 139, 00:20:18.156 "qid": 0, 00:20:18.156 "state": "enabled", 00:20:18.156 "thread": "nvmf_tgt_poll_group_000", 00:20:18.156 "listen_address": { 00:20:18.156 "trtype": "TCP", 00:20:18.156 "adrfam": "IPv4", 00:20:18.156 "traddr": "10.0.0.2", 00:20:18.156 "trsvcid": "4420" 00:20:18.156 }, 00:20:18.156 "peer_address": { 00:20:18.156 "trtype": "TCP", 00:20:18.156 "adrfam": "IPv4", 00:20:18.156 "traddr": "10.0.0.1", 00:20:18.156 "trsvcid": "38344" 00:20:18.156 }, 00:20:18.156 "auth": { 00:20:18.156 "state": "completed", 00:20:18.156 "digest": "sha512", 00:20:18.156 "dhgroup": "ffdhe8192" 00:20:18.156 } 00:20:18.156 } 00:20:18.156 ]' 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.156 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWNiYWJkMmQ4ODVhNTUyNTQ0OTM5YTFlNTEyZjQ4MTF2KG61: --dhchap-ctrl-secret DHHC-1:02:Y2M3MjJmY2MxNjhjYmExMDVhZjc1MWZmYjM4OTdkZTY3ZmIyMGVjODViYzNiYTA2NQZQZQ==: 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.100 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.671 00:20:19.671 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.671 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.671 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.933 { 00:20:19.933 "cntlid": 141, 00:20:19.933 "qid": 0, 00:20:19.933 "state": "enabled", 00:20:19.933 "thread": "nvmf_tgt_poll_group_000", 00:20:19.933 "listen_address": 
{ 00:20:19.933 "trtype": "TCP", 00:20:19.933 "adrfam": "IPv4", 00:20:19.933 "traddr": "10.0.0.2", 00:20:19.933 "trsvcid": "4420" 00:20:19.933 }, 00:20:19.933 "peer_address": { 00:20:19.933 "trtype": "TCP", 00:20:19.933 "adrfam": "IPv4", 00:20:19.933 "traddr": "10.0.0.1", 00:20:19.933 "trsvcid": "38380" 00:20:19.933 }, 00:20:19.933 "auth": { 00:20:19.933 "state": "completed", 00:20:19.933 "digest": "sha512", 00:20:19.933 "dhgroup": "ffdhe8192" 00:20:19.933 } 00:20:19.933 } 00:20:19.933 ]' 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.933 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.195 16:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2M5YzI3OGJjOTEzYzYwNjdiMjI4MzJhZDM2MjQxNGNmNDJjMzIwZmQ2ZmY1NDUwyEXkhg==: --dhchap-ctrl-secret DHHC-1:01:OGRiNGQ2NDk2MzY3YTI1YzdmM2QyMzlhN2UxNWNlMTJJihBh: 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.155 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.727 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.727 { 00:20:21.727 "cntlid": 143, 00:20:21.727 "qid": 0, 00:20:21.727 "state": "enabled", 00:20:21.727 "thread": "nvmf_tgt_poll_group_000", 00:20:21.727 "listen_address": { 00:20:21.727 "trtype": "TCP", 00:20:21.727 "adrfam": "IPv4", 00:20:21.727 "traddr": "10.0.0.2", 00:20:21.727 "trsvcid": "4420" 00:20:21.727 }, 00:20:21.727 "peer_address": { 00:20:21.727 "trtype": "TCP", 00:20:21.727 "adrfam": "IPv4", 00:20:21.727 "traddr": "10.0.0.1", 00:20:21.727 "trsvcid": "38402" 00:20:21.727 }, 00:20:21.727 "auth": { 00:20:21.727 "state": "completed", 00:20:21.727 "digest": "sha512", 00:20:21.727 "dhgroup": 
"ffdhe8192" 00:20:21.727 } 00:20:21.727 } 00:20:21.727 ]' 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.727 16:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.988 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.988 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.988 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.988 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.988 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.988 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.988 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:20:22.931 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.931 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.931 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.931 16:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.931 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.501 00:20:23.501 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.502 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.502 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.761 { 00:20:23.761 "cntlid": 145, 00:20:23.761 "qid": 0, 00:20:23.761 "state": "enabled", 00:20:23.761 "thread": "nvmf_tgt_poll_group_000", 00:20:23.761 "listen_address": { 00:20:23.761 "trtype": "TCP", 00:20:23.761 "adrfam": "IPv4", 00:20:23.761 "traddr": "10.0.0.2", 00:20:23.761 "trsvcid": "4420" 00:20:23.761 }, 00:20:23.761 "peer_address": { 00:20:23.761 "trtype": "TCP", 00:20:23.761 "adrfam": "IPv4", 00:20:23.761 "traddr": "10.0.0.1", 00:20:23.761 "trsvcid": "38442" 00:20:23.761 }, 00:20:23.761 "auth": { 00:20:23.761 
"state": "completed", 00:20:23.761 "digest": "sha512", 00:20:23.761 "dhgroup": "ffdhe8192" 00:20:23.761 } 00:20:23.761 } 00:20:23.761 ]' 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.761 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.761 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.761 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.761 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.022 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NmMzMjNjZDJmYmViOWVhYTkxOWI5NzYyNzM5YmU4YzhiZTg0OWE1OGNiYTk4YjFh6wHgIg==: --dhchap-ctrl-secret DHHC-1:03:ZDlhNzJhODQ1YjBhYTAxOWJmM2ZjMDI5ZTc1OTg1YWEwMWQ1Mjg2MzgxNTUwMjQzZmI1MzcxYjQwOTk3MzM1YpeyWPI=: 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:24.966 16:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.966 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:25.227 request: 00:20:25.227 { 00:20:25.227 "name": "nvme0", 00:20:25.227 "trtype": "tcp", 00:20:25.227 "traddr": "10.0.0.2", 00:20:25.227 "adrfam": "ipv4", 00:20:25.227 "trsvcid": "4420", 00:20:25.227 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:25.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:25.227 "prchk_reftag": false, 00:20:25.227 "prchk_guard": false, 00:20:25.227 "hdgst": false, 00:20:25.227 "ddgst": false, 00:20:25.227 "dhchap_key": "key2", 00:20:25.227 "method": "bdev_nvme_attach_controller", 00:20:25.227 "req_id": 1 00:20:25.227 } 00:20:25.227 Got JSON-RPC error response 00:20:25.227 response: 00:20:25.227 { 00:20:25.227 "code": -5, 00:20:25.227 "message": "Input/output error" 00:20:25.227 } 00:20:25.227 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.228 
16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.228 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.800 request: 00:20:25.800 { 00:20:25.800 "name": "nvme0", 00:20:25.800 "trtype": "tcp", 00:20:25.800 "traddr": "10.0.0.2", 00:20:25.800 "adrfam": "ipv4", 00:20:25.800 "trsvcid": "4420", 00:20:25.800 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:25.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:25.800 "prchk_reftag": false, 00:20:25.800 "prchk_guard": false, 00:20:25.800 "hdgst": false, 00:20:25.800 "ddgst": false, 00:20:25.800 "dhchap_key": "key1", 00:20:25.800 "dhchap_ctrlr_key": "ckey2", 00:20:25.800 "method": "bdev_nvme_attach_controller", 00:20:25.800 "req_id": 1 00:20:25.800 } 00:20:25.800 Got JSON-RPC error response 00:20:25.800 response: 00:20:25.800 { 00:20:25.800 "code": -5, 00:20:25.800 "message": "Input/output error" 00:20:25.800 } 00:20:25.800 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:25.800 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.800 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.800 16:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.800 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.800 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.801 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.801 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.801 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:25.801 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.801 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.801 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.801 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.801 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.374 request: 00:20:26.374 { 00:20:26.374 "name": "nvme0", 00:20:26.374 "trtype": "tcp", 00:20:26.374 "traddr": "10.0.0.2", 00:20:26.374 "adrfam": "ipv4", 00:20:26.374 "trsvcid": "4420", 00:20:26.374 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:26.374 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:26.374 "prchk_reftag": false, 00:20:26.374 "prchk_guard": false, 00:20:26.374 "hdgst": false, 00:20:26.374 "ddgst": false, 00:20:26.374 "dhchap_key": "key1", 00:20:26.374 "dhchap_ctrlr_key": "ckey1", 00:20:26.374 "method": "bdev_nvme_attach_controller", 00:20:26.374 "req_id": 1 00:20:26.374 } 00:20:26.374 Got JSON-RPC error response 00:20:26.374 response: 00:20:26.374 { 00:20:26.374 "code": -5, 00:20:26.374 "message": "Input/output error" 00:20:26.374 } 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1422636 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1422636 ']' 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1422636 00:20:26.374 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:26.375 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.375 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1422636 00:20:26.375 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.375 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.375 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1422636' 00:20:26.375 killing process with pid 1422636 00:20:26.375 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1422636 00:20:26.375 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1422636 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1448861 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1448861 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1448861 ']' 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.644 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1448861 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1448861 ']' 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
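For readers following the trace, the positive-path cases above boil down to a short RPC sequence: register the host NQN on the target subsystem with a DH-HMAC-CHAP key, attach a host-side controller presenting the matching key, verify it, then tear it down before the next permutation. A minimal sketch of that flow, assuming an nvmf_tgt RPC socket at /var/tmp/spdk.sock and a host bdev_nvme RPC socket at /var/tmp/host.sock (the relative rpc.py path, NQNs and key names are illustrative placeholders standing in for the values keyed earlier in the test):

    # Target side: allow the host NQN to authenticate with key0 (ckey0 enables bidirectional auth).
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller over TCP, presenting the same key pair.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller came up, then clean up for the next case.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

The connect_authenticate helper traced above then inspects nvmf_subsystem_get_qpairs output with jq to confirm auth.state is "completed" with the expected digest and dhgroup.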
00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:27.588 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.589 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.589 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.589 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.589 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.161 00:20:28.162 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.162 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.162 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.423 { 00:20:28.423 "cntlid": 1, 00:20:28.423 "qid": 0, 00:20:28.423 "state": "enabled", 00:20:28.423 "thread": "nvmf_tgt_poll_group_000", 00:20:28.423 "listen_address": { 00:20:28.423 "trtype": "TCP", 00:20:28.423 "adrfam": "IPv4", 00:20:28.423 "traddr": "10.0.0.2", 00:20:28.423 "trsvcid": "4420" 00:20:28.423 }, 00:20:28.423 "peer_address": { 00:20:28.423 "trtype": "TCP", 00:20:28.423 "adrfam": "IPv4", 00:20:28.423 "traddr": "10.0.0.1", 00:20:28.423 "trsvcid": "57190" 00:20:28.423 }, 00:20:28.423 "auth": { 00:20:28.423 "state": "completed", 00:20:28.423 "digest": "sha512", 00:20:28.423 "dhgroup": "ffdhe8192" 00:20:28.423 } 00:20:28.423 } 00:20:28.423 ]' 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.423 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.683 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MjA1MjVlNWZlYTMzNmE5NDFmZjE2MGZmMjE5YTc4ZjhlOTNhZjIxYjViOTY3NDUwZTE4NTI5ODQwMmI2ZmIxZRRDZUE=: 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.627 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.889 request: 00:20:29.889 { 00:20:29.889 "name": "nvme0", 00:20:29.889 "trtype": "tcp", 00:20:29.889 "traddr": "10.0.0.2", 00:20:29.889 "adrfam": "ipv4", 00:20:29.889 "trsvcid": "4420", 00:20:29.889 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:29.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:29.889 "prchk_reftag": false, 00:20:29.889 "prchk_guard": false, 00:20:29.889 "hdgst": false, 00:20:29.889 "ddgst": false, 00:20:29.889 "dhchap_key": "key3", 00:20:29.889 "method": "bdev_nvme_attach_controller", 00:20:29.889 "req_id": 1 00:20:29.889 } 00:20:29.889 Got JSON-RPC error response 00:20:29.889 response: 00:20:29.889 { 00:20:29.889 "code": -5, 00:20:29.889 "message": "Input/output error" 00:20:29.889 } 00:20:29.889 16:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:29.889 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:29.889 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:29.889 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:29.889 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:29.889 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:29.889 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:29.889 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.889 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.150 request: 00:20:30.150 { 00:20:30.150 "name": "nvme0", 00:20:30.150 "trtype": "tcp", 00:20:30.150 "traddr": "10.0.0.2", 00:20:30.150 "adrfam": "ipv4", 00:20:30.150 "trsvcid": "4420", 00:20:30.150 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:30.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:30.150 "prchk_reftag": false, 00:20:30.150 "prchk_guard": false, 00:20:30.150 "hdgst": false, 00:20:30.150 "ddgst": false, 00:20:30.150 "dhchap_key": "key3", 00:20:30.150 
"method": "bdev_nvme_attach_controller", 00:20:30.150 "req_id": 1 00:20:30.150 } 00:20:30.150 Got JSON-RPC error response 00:20:30.150 response: 00:20:30.150 { 00:20:30.150 "code": -5, 00:20:30.150 "message": "Input/output error" 00:20:30.150 } 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:30.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:30.151 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:30.151 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:30.413 request: 00:20:30.413 { 00:20:30.413 "name": "nvme0", 00:20:30.413 "trtype": "tcp", 00:20:30.413 "traddr": "10.0.0.2", 00:20:30.413 "adrfam": "ipv4", 00:20:30.413 "trsvcid": "4420", 00:20:30.413 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:30.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:30.413 "prchk_reftag": false, 00:20:30.413 "prchk_guard": false, 00:20:30.413 "hdgst": false, 00:20:30.413 "ddgst": false, 00:20:30.413 "dhchap_key": "key0", 00:20:30.413 "dhchap_ctrlr_key": "key1", 00:20:30.413 "method": "bdev_nvme_attach_controller", 00:20:30.413 "req_id": 1 00:20:30.413 } 00:20:30.413 Got JSON-RPC error response 00:20:30.413 response: 00:20:30.413 { 00:20:30.413 "code": -5, 00:20:30.413 "message": "Input/output error" 00:20:30.413 } 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:30.413 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:30.674 00:20:30.674 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:30.674 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
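The failure cases in this trace all follow one pattern: deliberately constrain or mismatch the host-side DH-HMAC-CHAP parameters, then expect bdev_nvme_attach_controller to fail with the JSON-RPC "Input/output error" (code -5) seen above. A minimal sketch of that pattern, using the same placeholder sockets and NQNs as the earlier sketch (the test itself wraps the expected failure in its NOT/valid_exec_arg helpers rather than a plain if):

    # Restrict the host to a single digest so the DH-HMAC-CHAP negotiation is expected to fail.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256

    # The attach should now be rejected; treat unexpected success as a test failure.
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "unexpected successful attach" >&2
        exit 1
    fi

    # Restore the full digest and DH-group sets before the next case.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

After the last permutation the trace proceeds to teardown: detach the controller, kill the target and host app PIDs, unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, and remove the temporary key files, before moving on to the nvmf_bdevio_no_huge test.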
00:20:30.674 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.936 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.936 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.936 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1422719 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1422719 ']' 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1422719 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.936 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1422719 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1422719' 00:20:31.197 killing process with pid 1422719 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1422719 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1422719 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.197 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.197 rmmod nvme_tcp 00:20:31.197 rmmod nvme_fabrics 00:20:31.197 rmmod nvme_keyring 00:20:31.458 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.458 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:31.458 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:31.458 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1448861 ']' 00:20:31.458 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1448861 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1448861 ']' 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1448861 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1448861 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1448861' 00:20:31.459 killing process with pid 1448861 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1448861 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1448861 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.459 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cUt /tmp/spdk.key-sha256.l1W /tmp/spdk.key-sha384.pii /tmp/spdk.key-sha512.dX3 /tmp/spdk.key-sha512.VS7 /tmp/spdk.key-sha384.1IJ /tmp/spdk.key-sha256.udo '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:34.068 00:20:34.068 real 2m23.514s 00:20:34.068 user 5m19.684s 00:20:34.068 sys 0m21.316s 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.068 ************************************ 00:20:34.068 END TEST nvmf_auth_target 00:20:34.068 ************************************ 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:34.068 16:59:53 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:34.068 16:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:34.068 ************************************ 00:20:34.069 START TEST nvmf_bdevio_no_huge 00:20:34.069 ************************************ 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:34.069 * Looking for test storage... 00:20:34.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.069 16:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.069 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.664 17:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.664 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:40.665 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.665 17:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:40.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:40.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
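The PCI scan traced above resolves each detected E810 port (PCI ID 0x8086:0x159b, ice driver) to its kernel interface by globbing sysfs, which is how the cvl_0_0/cvl_0_1 names reported next are obtained. A minimal standalone sketch of that lookup, with the PCI addresses taken from this log and no error handling:

    #!/usr/bin/env bash
    # Map each PCI function to the net device registered under it, the same way
    # nvmf/common.sh does via /sys/bus/pci/devices/$pci/net/*.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done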
00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:40.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:40.665 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:40.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:20:40.665 00:20:40.665 --- 10.0.0.2 ping statistics --- 00:20:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.665 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:20:40.665 00:20:40.665 --- 10.0.0.1 ping statistics --- 00:20:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.665 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.665 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1453895 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1453895 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1453895 ']' 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
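Taken together, the nvmf_tcp_init commands traced above build the point-to-point test network everything after this depends on: the first E810 port (cvl_0_0) is moved into a fresh namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is verified in both directions before the target application is started. Condensed into a standalone sketch (interface names and addresses exactly as in this log; run as root):

    #!/usr/bin/env bash
    set -e
    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"                          # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns "$ns"             # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, default namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1      # target -> initiator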
00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.666 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:40.666 [2024-07-25 17:00:00.853759] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:20:40.666 [2024-07-25 17:00:00.853831] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:40.927 [2024-07-25 17:00:00.948154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.928 [2024-07-25 17:00:01.055950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.928 [2024-07-25 17:00:01.056007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.928 [2024-07-25 17:00:01.056017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.928 [2024-07-25 17:00:01.056024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.928 [2024-07-25 17:00:01.056030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.928 [2024-07-25 17:00:01.056191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.928 [2024-07-25 17:00:01.056354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:40.928 [2024-07-25 17:00:01.056654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:40.928 [2024-07-25 17:00:01.056658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.497 [2024-07-25 17:00:01.683140] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- 
# rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.497 Malloc0 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.497 [2024-07-25 17:00:01.724633] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.497 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.498 { 00:20:41.498 "params": { 00:20:41.498 "name": "Nvme$subsystem", 00:20:41.498 "trtype": "$TEST_TRANSPORT", 00:20:41.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.498 "adrfam": "ipv4", 00:20:41.498 "trsvcid": "$NVMF_PORT", 00:20:41.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.498 "hdgst": ${hdgst:-false}, 00:20:41.498 "ddgst": ${ddgst:-false} 00:20:41.498 }, 00:20:41.498 "method": "bdev_nvme_attach_controller" 00:20:41.498 } 00:20:41.498 EOF 00:20:41.498 )") 00:20:41.498 17:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:41.498 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:41.498 "params": { 00:20:41.498 "name": "Nvme1", 00:20:41.498 "trtype": "tcp", 00:20:41.498 "traddr": "10.0.0.2", 00:20:41.498 "adrfam": "ipv4", 00:20:41.498 "trsvcid": "4420", 00:20:41.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.498 "hdgst": false, 00:20:41.498 "ddgst": false 00:20:41.498 }, 00:20:41.498 "method": "bdev_nvme_attach_controller" 00:20:41.498 }' 00:20:41.757 [2024-07-25 17:00:01.781144] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:20:41.757 [2024-07-25 17:00:01.781228] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1454229 ] 00:20:41.757 [2024-07-25 17:00:01.851029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.757 [2024-07-25 17:00:01.948655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.757 [2024-07-25 17:00:01.948772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.757 [2024-07-25 17:00:01.948776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.017 I/O targets: 00:20:42.017 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:42.017 00:20:42.017 00:20:42.017 CUnit - A unit testing framework for C - Version 2.1-3 00:20:42.017 http://cunit.sourceforge.net/ 00:20:42.017 00:20:42.017 00:20:42.017 Suite: bdevio tests on: Nvme1n1 00:20:42.017 Test: blockdev write read block ...passed 00:20:42.017 Test: blockdev write zeroes read block ...passed 00:20:42.017 Test: blockdev write zeroes read no split ...passed 00:20:42.287 Test: blockdev write zeroes read split ...passed 00:20:42.287 Test: blockdev write zeroes read split partial ...passed 00:20:42.287 Test: blockdev reset ...[2024-07-25 17:00:02.384709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:42.287 [2024-07-25 17:00:02.384774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f0c10 (9): Bad file descriptor 00:20:42.287 [2024-07-25 17:00:02.416370] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:42.287 passed 00:20:42.287 Test: blockdev write read 8 blocks ...passed 00:20:42.287 Test: blockdev write read size > 128k ...passed 00:20:42.287 Test: blockdev write read invalid size ...passed 00:20:42.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:42.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:42.287 Test: blockdev write read max offset ...passed 00:20:42.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:42.287 Test: blockdev writev readv 8 blocks ...passed 00:20:42.287 Test: blockdev writev readv 30 x 1block ...passed 00:20:42.562 Test: blockdev writev readv block ...passed 00:20:42.562 Test: blockdev writev readv size > 128k ...passed 00:20:42.562 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:42.562 Test: blockdev comparev and writev ...[2024-07-25 17:00:02.649478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.649504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.562 [2024-07-25 17:00:02.649515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.649524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.562 [2024-07-25 17:00:02.650160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.650168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:42.562 [2024-07-25 17:00:02.650177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.650183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:42.562 [2024-07-25 17:00:02.650849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.650856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:42.562 [2024-07-25 17:00:02.650866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.650871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:42.562 [2024-07-25 17:00:02.651518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.651525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:42.562 [2024-07-25 17:00:02.651534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.562 [2024-07-25 17:00:02.651540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:42.562 passed 00:20:42.562 Test: blockdev nvme passthru rw ...passed 00:20:42.562 Test: blockdev nvme passthru vendor specific ...[2024-07-25 17:00:02.736350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.563 [2024-07-25 17:00:02.736360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:42.563 [2024-07-25 17:00:02.736823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.563 [2024-07-25 17:00:02.736830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:42.563 [2024-07-25 17:00:02.737333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.563 [2024-07-25 17:00:02.737341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:42.563 [2024-07-25 17:00:02.737816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.563 [2024-07-25 17:00:02.737823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:42.563 passed 00:20:42.563 Test: blockdev nvme admin passthru ...passed 00:20:42.563 Test: blockdev copy ...passed 00:20:42.563 00:20:42.563 Run Summary: Type Total Ran Passed Failed Inactive 00:20:42.563 suites 1 1 n/a 0 0 00:20:42.563 tests 23 23 23 0 0 00:20:42.563 asserts 152 152 152 0 n/a 00:20:42.563 00:20:42.563 Elapsed time = 1.328 seconds 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.824 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.824 rmmod nvme_tcp 00:20:42.824 rmmod nvme_fabrics 00:20:43.085 rmmod nvme_keyring 00:20:43.085 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.085 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:20:43.085 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:43.085 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1453895 ']' 00:20:43.085 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1453895 00:20:43.085 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1453895 ']' 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1453895 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1453895 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1453895' 00:20:43.086 killing process with pid 1453895 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1453895 00:20:43.086 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1453895 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.347 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.931 00:20:45.931 real 0m11.786s 00:20:45.931 user 0m13.731s 00:20:45.931 sys 0m6.107s 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:45.931 ************************************ 00:20:45.931 END TEST nvmf_bdevio_no_huge 00:20:45.931 ************************************ 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:45.931 ************************************ 00:20:45.931 START TEST nvmf_tls 00:20:45.931 ************************************ 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:45.931 * Looking for test storage... 00:20:45.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
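As in the bdevio run above, build_nvmf_app_args only assembles the target's argument array at this point; nvmf_tcp_init later prepends the namespace wrapper (nvmf/common.sh@270) and nvmfappstart finally launches the binary. A rough reconstruction of that assembly from what this trace shows — where exactly the nvmf_tgt binary path enters the array is not visible here, so treat that line as an assumption:

    # Sketch of the nvmf_tgt command-line assembly seen in nvmf/common.sh
    # (values as they appear in this log; NO_HUGE is empty except in the
    # --no-hugepages runs, where it carries "--no-huge -s 1024").
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)  # assumed starting point
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shared-memory id and trace mask (common.sh@29)
    NVMF_APP+=("${NO_HUGE[@]}")                    # optional hugepage-less flags (common.sh@31)
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run inside the target namespace (common.sh@270)
    "${NVMF_APP[@]}" -m 0x2 --wait-for-rpc &       # nvmfappstart appends the per-test core mask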
00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.931 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.527 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:52.528 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:52.528 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:52.528 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:52.528 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.528 17:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:52.528 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:52.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:20:52.790 00:20:52.790 --- 10.0.0.2 ping statistics --- 00:20:52.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.790 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:52.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:20:52.790 00:20:52.790 --- 10.0.0.1 ping statistics --- 00:20:52.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.790 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1459053 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1459053 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1459053 ']' 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:52.790 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.790 [2024-07-25 17:00:13.034001] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
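Up to this point the harness has detected the two e810 ports, moved cvl_0_0 into a private network namespace, addressed both ends on 10.0.0.0/24, opened TCP/4420 toward the initiator side, and verified reachability in both directions before starting nvmf_tgt inside the namespace. A condensed sketch of that setup (run as root), with the interface names, namespace name, and addresses taken directly from the log above:

    # Condensed replay of nvmf_tcp_init as logged above (nvmf/common.sh).
    TGT_IF=cvl_0_0          # target-side port, moved into the namespace
    INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator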
00:20:52.790 [2024-07-25 17:00:13.034063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.051 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.051 [2024-07-25 17:00:13.121839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.051 [2024-07-25 17:00:13.190557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.051 [2024-07-25 17:00:13.190601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.051 [2024-07-25 17:00:13.190610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.051 [2024-07-25 17:00:13.190617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.051 [2024-07-25 17:00:13.190623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.051 [2024-07-25 17:00:13.190643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:53.634 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:53.900 true 00:20:53.900 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:53.900 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:54.161 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:54.161 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:54.161 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:54.161 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.161 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:54.422 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:54.422 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:54.422 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:20:54.683 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.683 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:54.683 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:54.683 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:54.683 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.683 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:54.945 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:54.945 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:54.945 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:54.945 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.945 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:55.206 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:55.206 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:55.206 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:55.467 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.O7JSHuf82m 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Onuwh9iMUm 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.O7JSHuf82m 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Onuwh9iMUm 00:20:55.729 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:56.000 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:56.000 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.O7JSHuf82m 00:20:56.000 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O7JSHuf82m 00:20:56.000 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:56.268 [2024-07-25 17:00:16.404738] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.268 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.529 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:56.529 [2024-07-25 17:00:16.729548] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.529 [2024-07-25 17:00:16.729849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.529 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:56.789 malloc0 00:20:56.790 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:56.790 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O7JSHuf82m 00:20:57.071 [2024-07-25 17:00:17.185575] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.071 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.O7JSHuf82m 00:20:57.071 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.111 Initializing NVMe Controllers 00:21:07.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:07.111 Initialization complete. Launching workers. 00:21:07.111 ======================================================== 00:21:07.111 Latency(us) 00:21:07.111 Device Information : IOPS MiB/s Average min max 00:21:07.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19005.93 74.24 3367.39 997.08 4656.20 00:21:07.111 ======================================================== 00:21:07.111 Total : 19005.93 74.24 3367.39 997.08 4656.20 00:21:07.111 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7JSHuf82m 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O7JSHuf82m' 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1461925 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1461925 /var/tmp/bdevperf.sock 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1461925 ']' 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.111 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.111 [2024-07-25 17:00:27.371785] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:07.111 [2024-07-25 17:00:27.371845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461925 ] 00:21:07.372 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.372 [2024-07-25 17:00:27.421505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.372 [2024-07-25 17:00:27.474045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.944 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:07.944 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:07.944 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O7JSHuf82m 00:21:08.205 [2024-07-25 17:00:28.278869] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.205 [2024-07-25 17:00:28.278931] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:08.205 TLSTESTn1 00:21:08.205 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.205 Running I/O for 10 seconds... 
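With networking in place, tls.sh switches the default sock implementation to ssl, pins TLS 1.3, writes the generated interchange key to a mode-0600 temp file, and exports a malloc namespace behind a TLS-enabled listener; bdevperf then attaches with the matching key over its own RPC socket. A condensed sketch of that RPC sequence, taken from the commands logged above; the rpc.py path is shortened for readability and the mktemp'd key file is replaced by a placeholder:

    RPC=./scripts/rpc.py    # shortened; the log uses the full workspace path
    KEY=/tmp/psk.key        # placeholder for the mktemp'd file holding the NVMeTLSkey-1:01: string

    # target side (nvmf_tgt already running with --wait-for-rpc, as started above)
    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # initiator side (bdevperf running with -z, RPC socket /var/tmp/bdevperf.sock)
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$KEY"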
00:21:20.473 00:21:20.473 Latency(us) 00:21:20.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.473 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.473 Verification LBA range: start 0x0 length 0x2000 00:21:20.473 TLSTESTn1 : 10.08 1929.91 7.54 0.00 0.00 66094.23 6116.69 133693.44 00:21:20.473 =================================================================================================================== 00:21:20.473 Total : 1929.91 7.54 0.00 0.00 66094.23 6116.69 133693.44 00:21:20.473 0 00:21:20.473 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.473 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1461925 00:21:20.473 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1461925 ']' 00:21:20.473 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1461925 00:21:20.473 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1461925 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1461925' 00:21:20.474 killing process with pid 1461925 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1461925 00:21:20.474 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.474 00:21:20.474 Latency(us) 00:21:20.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.474 =================================================================================================================== 00:21:20.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.474 [2024-07-25 17:00:38.650716] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1461925 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Onuwh9iMUm 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Onuwh9iMUm 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Onuwh9iMUm 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Onuwh9iMUm' 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1464120 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1464120 /var/tmp/bdevperf.sock 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1464120 ']' 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.474 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.474 [2024-07-25 17:00:38.814452] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:21:20.474 [2024-07-25 17:00:38.814506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464120 ] 00:21:20.474 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.474 [2024-07-25 17:00:38.864558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.474 [2024-07-25 17:00:38.915370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Onuwh9iMUm 00:21:20.474 [2024-07-25 17:00:39.740353] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.474 [2024-07-25 17:00:39.740416] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.474 [2024-07-25 17:00:39.744740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.474 [2024-07-25 17:00:39.745371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2aec0 (107): Transport endpoint is not connected 00:21:20.474 [2024-07-25 17:00:39.746364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2aec0 (9): Bad file descriptor 00:21:20.474 [2024-07-25 17:00:39.747366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:20.474 [2024-07-25 17:00:39.747373] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:20.474 [2024-07-25 17:00:39.747380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
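This first negative case offers the second key (/tmp/tmp.Onuwh9iMUm) while cnode1/host1 was registered with the first, so the TLS handshake never completes and bdev_nvme_attach_controller surfaces it as the Input/output error below. The NVMeTLSkey-1:01:...: strings generated earlier in this section follow the TLS PSK interchange format: a version prefix, a hash identifier, and base64 of the configured key followed by a 4-byte checksum. A rough reconstruction of what the format_interchange_psk/format_key helpers print, under the assumption that the checksum is a little-endian zlib-style CRC32 of the key text (the helper's exact variant is not visible in the log):

    # Assumed reconstruction of the interchange format; checksum variant is an assumption.
    key=00112233445566778899aabbccddeeff
    digest=1
    python3 - "$key" "$digest" <<'EOF'
    import base64, struct, sys, zlib
    key, digest = sys.argv[1], int(sys.argv[2])
    crc = struct.pack("<I", zlib.crc32(key.encode()) & 0xFFFFFFFF)
    print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key.encode() + crc).decode()}:")
    EOF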
00:21:20.474 request: 00:21:20.474 { 00:21:20.474 "name": "TLSTEST", 00:21:20.474 "trtype": "tcp", 00:21:20.474 "traddr": "10.0.0.2", 00:21:20.474 "adrfam": "ipv4", 00:21:20.474 "trsvcid": "4420", 00:21:20.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.474 "prchk_reftag": false, 00:21:20.474 "prchk_guard": false, 00:21:20.474 "hdgst": false, 00:21:20.474 "ddgst": false, 00:21:20.474 "psk": "/tmp/tmp.Onuwh9iMUm", 00:21:20.474 "method": "bdev_nvme_attach_controller", 00:21:20.474 "req_id": 1 00:21:20.474 } 00:21:20.474 Got JSON-RPC error response 00:21:20.474 response: 00:21:20.474 { 00:21:20.474 "code": -5, 00:21:20.474 "message": "Input/output error" 00:21:20.474 } 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1464120 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1464120 ']' 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1464120 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464120 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464120' 00:21:20.474 killing process with pid 1464120 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1464120 00:21:20.474 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.474 00:21:20.474 Latency(us) 00:21:20.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.474 =================================================================================================================== 00:21:20.474 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.474 [2024-07-25 17:00:39.832212] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1464120 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O7JSHuf82m 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O7JSHuf82m 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O7JSHuf82m 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O7JSHuf82m' 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1464458 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1464458 /var/tmp/bdevperf.sock 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1464458 ']' 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.474 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.474 [2024-07-25 17:00:39.990247] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:21:20.474 [2024-07-25 17:00:39.990301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464458 ] 00:21:20.474 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.474 [2024-07-25 17:00:40.041450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.474 [2024-07-25 17:00:40.094845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.O7JSHuf82m 00:21:20.735 [2024-07-25 17:00:40.908646] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.735 [2024-07-25 17:00:40.908714] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.735 [2024-07-25 17:00:40.916018] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:20.735 [2024-07-25 17:00:40.916037] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:20.735 [2024-07-25 17:00:40.916055] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.735 [2024-07-25 17:00:40.916902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8ec0 (107): Transport endpoint is not connected 00:21:20.735 [2024-07-25 17:00:40.917897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8ec0 (9): Bad file descriptor 00:21:20.735 [2024-07-25 17:00:40.918899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:20.735 [2024-07-25 17:00:40.918906] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:20.735 [2024-07-25 17:00:40.918913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
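Here the key is valid but the host NQN is not: only host1 was added to cnode1, so the target cannot resolve a PSK for the identity NVMe0R01 host2/cnode1 (the tcp_sock_get_key error above) and the attach fails with the same I/O error below. If a second host were actually meant to connect, it would be registered the same way host1 was; a hypothetical sketch, with a made-up key path:

    RPC=./scripts/rpc.py    # shortened path
    # Hypothetical: admit a second host on the same subsystem with its own key file.
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/host2.key
    $RPC nvmf_get_subsystems    # lists the allowed hosts per subsystem for verification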
00:21:20.735 request: 00:21:20.735 { 00:21:20.735 "name": "TLSTEST", 00:21:20.735 "trtype": "tcp", 00:21:20.735 "traddr": "10.0.0.2", 00:21:20.735 "adrfam": "ipv4", 00:21:20.735 "trsvcid": "4420", 00:21:20.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:20.735 "prchk_reftag": false, 00:21:20.735 "prchk_guard": false, 00:21:20.735 "hdgst": false, 00:21:20.735 "ddgst": false, 00:21:20.735 "psk": "/tmp/tmp.O7JSHuf82m", 00:21:20.735 "method": "bdev_nvme_attach_controller", 00:21:20.735 "req_id": 1 00:21:20.735 } 00:21:20.735 Got JSON-RPC error response 00:21:20.735 response: 00:21:20.735 { 00:21:20.735 "code": -5, 00:21:20.735 "message": "Input/output error" 00:21:20.735 } 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1464458 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1464458 ']' 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1464458 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.735 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464458 00:21:20.735 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:20.735 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:20.735 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464458' 00:21:20.735 killing process with pid 1464458 00:21:20.735 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1464458 00:21:20.735 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.735 00:21:20.735 Latency(us) 00:21:20.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.736 =================================================================================================================== 00:21:20.736 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.736 [2024-07-25 17:00:41.004699] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.736 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1464458 00:21:20.997 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:20.997 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:20.997 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.997 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7JSHuf82m 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7JSHuf82m 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7JSHuf82m 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O7JSHuf82m' 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1464601 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1464601 /var/tmp/bdevperf.sock 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1464601 ']' 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.998 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.998 [2024-07-25 17:00:41.160674] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:21:20.998 [2024-07-25 17:00:41.160730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464601 ] 00:21:20.998 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.998 [2024-07-25 17:00:41.210657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.998 [2024-07-25 17:00:41.262691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.259 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.259 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:21.259 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O7JSHuf82m 00:21:21.259 [2024-07-25 17:00:41.466396] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.259 [2024-07-25 17:00:41.466465] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:21.260 [2024-07-25 17:00:41.477369] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:21.260 [2024-07-25 17:00:41.477384] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:21.260 [2024-07-25 17:00:41.477402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:21.260 [2024-07-25 17:00:41.477708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc7ec0 (107): Transport endpoint is not connected 00:21:21.260 [2024-07-25 17:00:41.478703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc7ec0 (9): Bad file descriptor 00:21:21.260 [2024-07-25 17:00:41.479705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:21.260 [2024-07-25 17:00:41.479712] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:21.260 [2024-07-25 17:00:41.479719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
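This variant keeps the valid key and host but targets nqn.2016-06.io.spdk:cnode2, which was never created, so PSK identity resolution fails again and the attach is rejected with the error response below. Each of these negative steps is wrapped in the NOT helper whose xtrace appears above; a simplified stand-in for it (not the real autotest_common.sh implementation) that captures the assertion, namely that the step passes only when the wrapped command exits non-zero:

    # Simplified stand-in for the NOT() assertion used by these negative tests.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT false && echo 'negative test behaved as expected'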
00:21:21.260 request: 00:21:21.260 { 00:21:21.260 "name": "TLSTEST", 00:21:21.260 "trtype": "tcp", 00:21:21.260 "traddr": "10.0.0.2", 00:21:21.260 "adrfam": "ipv4", 00:21:21.260 "trsvcid": "4420", 00:21:21.260 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.260 "prchk_reftag": false, 00:21:21.260 "prchk_guard": false, 00:21:21.260 "hdgst": false, 00:21:21.260 "ddgst": false, 00:21:21.260 "psk": "/tmp/tmp.O7JSHuf82m", 00:21:21.260 "method": "bdev_nvme_attach_controller", 00:21:21.260 "req_id": 1 00:21:21.260 } 00:21:21.260 Got JSON-RPC error response 00:21:21.260 response: 00:21:21.260 { 00:21:21.260 "code": -5, 00:21:21.260 "message": "Input/output error" 00:21:21.260 } 00:21:21.260 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1464601 00:21:21.260 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1464601 ']' 00:21:21.260 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1464601 00:21:21.260 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:21.260 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:21.260 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464601 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464601' 00:21:21.522 killing process with pid 1464601 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1464601 00:21:21.522 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.522 00:21:21.522 Latency(us) 00:21:21.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.522 =================================================================================================================== 00:21:21.522 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.522 [2024-07-25 17:00:41.568008] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1464601 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1464813 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1464813 /var/tmp/bdevperf.sock 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1464813 ']' 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.522 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.522 [2024-07-25 17:00:41.724339] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:21:21.522 [2024-07-25 17:00:41.724393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464813 ] 00:21:21.522 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.522 [2024-07-25 17:00:41.774411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.783 [2024-07-25 17:00:41.825033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.355 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.355 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:22.355 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:22.616 [2024-07-25 17:00:42.640228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:22.616 [2024-07-25 17:00:42.642558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15194a0 (9): Bad file descriptor 00:21:22.616 [2024-07-25 17:00:42.643556] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.616 [2024-07-25 17:00:42.643564] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:22.616 [2024-07-25 17:00:42.643572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
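The final variant omits --psk entirely. Because the cnode1 listener was added with -k (secure channel) earlier in this section, the plaintext connection is torn down before the controller can be initialized, and the RPC again reports the Input/output error below. For contrast, a sketch of the two listener forms; only the second would be expected to accept an attach without a key:

    RPC=./scripts/rpc.py    # shortened path
    # TLS-required listener, as used in this test:
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # Plain TCP listener (no secure-channel requirement):
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420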
00:21:22.616 request: 00:21:22.616 { 00:21:22.616 "name": "TLSTEST", 00:21:22.616 "trtype": "tcp", 00:21:22.616 "traddr": "10.0.0.2", 00:21:22.616 "adrfam": "ipv4", 00:21:22.616 "trsvcid": "4420", 00:21:22.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.616 "prchk_reftag": false, 00:21:22.616 "prchk_guard": false, 00:21:22.616 "hdgst": false, 00:21:22.616 "ddgst": false, 00:21:22.616 "method": "bdev_nvme_attach_controller", 00:21:22.616 "req_id": 1 00:21:22.616 } 00:21:22.616 Got JSON-RPC error response 00:21:22.616 response: 00:21:22.616 { 00:21:22.616 "code": -5, 00:21:22.616 "message": "Input/output error" 00:21:22.616 } 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1464813 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1464813 ']' 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1464813 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464813 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464813' 00:21:22.616 killing process with pid 1464813 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1464813 00:21:22.616 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.616 00:21:22.616 Latency(us) 00:21:22.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.616 =================================================================================================================== 00:21:22.616 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1464813 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1459053 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1459053 ']' 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1459053 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1459053 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1459053' 00:21:22.616 killing process with pid 1459053 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1459053 00:21:22.616 [2024-07-25 17:00:42.886197] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:22.616 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1459053 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ZWvKTCpXne 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ZWvKTCpXne 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1465050 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1465050 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1465050 ']' 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.879 17:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.879 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.880 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.880 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.880 [2024-07-25 17:00:43.103736] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:22.880 [2024-07-25 17:00:43.103792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.880 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.141 [2024-07-25 17:00:43.183808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.141 [2024-07-25 17:00:43.238222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.141 [2024-07-25 17:00:43.238251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.141 [2024-07-25 17:00:43.238256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.141 [2024-07-25 17:00:43.238261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.141 [2024-07-25 17:00:43.238265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
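Above, target/tls.sh@159-163 builds the long-format key: format_interchange_psk converts the 48-character hex string 00112233445566778899aabbccddeeff0011223344556677 (24 bytes) into the interchange form NVMeTLSkey-1:02:<base64>:, writes it to the mktemp file /tmp/tmp.ZWvKTCpXne, restricts it to mode 0600, and then starts a fresh nvmf_tgt (pid 1465050). A rough sketch of that formatting step, assuming the base64 payload is the ASCII hex key followed by its CRC32 as a 4-byte little-endian trailer (the inline python3 and the CRC convention are my reconstruction, not copied from the harness):

key=00112233445566778899aabbccddeeff0011223344556677
hash=02   # the "2" passed to format_interchange_psk above becomes this two-digit field
psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:"+sys.argv[2]+":"+base64.b64encode(k+crc).decode()+":")' "$key" "$hash")
echo "$psk"                  # should reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: value above, if the assumed CRC convention matches
key_path=$(mktemp)
echo -n "$psk" > "$key_path"
chmod 0600 "$key_path"       # the 0666/0600 checks later in this log show why the mode matters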
00:21:23.141 [2024-07-25 17:00:43.238279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ZWvKTCpXne 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZWvKTCpXne 00:21:23.712 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.973 [2024-07-25 17:00:44.036255] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.973 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:23.973 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.234 [2024-07-25 17:00:44.328964] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.234 [2024-07-25 17:00:44.329145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.234 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.234 malloc0 00:21:24.234 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.495 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne 00:21:24.757 [2024-07-25 17:00:44.775956] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZWvKTCpXne 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZWvKTCpXne' 00:21:24.757 17:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465425 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465425 /var/tmp/bdevperf.sock 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1465425 ']' 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.757 [2024-07-25 17:00:44.822881] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:24.757 [2024-07-25 17:00:44.822929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465425 ] 00:21:24.757 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.757 [2024-07-25 17:00:44.873948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.757 [2024-07-25 17:00:44.926519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:24.757 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne 00:21:25.018 [2024-07-25 17:00:45.142125] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.018 [2024-07-25 17:00:45.142183] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:25.018 TLSTESTn1 00:21:25.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:25.393 Running I/O for 10 seconds... 
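This is the first positive TLS pass in the excerpt: setup_nvmf_tgt creates the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a malloc0 namespace, a listener on 10.0.0.2:4420 with -k (TLS), and registers host1 with --psk /tmp/tmp.ZWvKTCpXne; run_bdevperf then attaches a TLSTEST controller with the same PSK and drives a 10-second verify workload whose results follow in the next lines. Collected in one place, the RPC sequence looks like this (paths shortened; every flag is taken from the trace above):

# Target side (nvmf_tgt, default RPC socket /var/tmp/spdk.sock)
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne

# Initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock)
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests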
00:21:35.424 00:21:35.424 Latency(us) 00:21:35.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.424 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:35.424 Verification LBA range: start 0x0 length 0x2000 00:21:35.424 TLSTESTn1 : 10.08 1978.98 7.73 0.00 0.00 64446.92 6144.00 138936.32 00:21:35.424 =================================================================================================================== 00:21:35.424 Total : 1978.98 7.73 0.00 0.00 64446.92 6144.00 138936.32 00:21:35.424 0 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1465425 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1465425 ']' 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1465425 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465425 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465425' 00:21:35.424 killing process with pid 1465425 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1465425 00:21:35.424 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.424 00:21:35.424 Latency(us) 00:21:35.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.424 =================================================================================================================== 00:21:35.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.424 [2024-07-25 17:00:55.518400] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1465425 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ZWvKTCpXne 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZWvKTCpXne 00:21:35.424 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZWvKTCpXne 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:35.425 
17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZWvKTCpXne 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZWvKTCpXne' 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1467558 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1467558 /var/tmp/bdevperf.sock 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1467558 ']' 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.425 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 [2024-07-25 17:00:55.700119] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:21:35.686 [2024-07-25 17:00:55.700190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467558 ] 00:21:35.686 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.686 [2024-07-25 17:00:55.749034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.686 [2024-07-25 17:00:55.801157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.259 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.259 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:36.259 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne 00:21:36.520 [2024-07-25 17:00:56.605993] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.520 [2024-07-25 17:00:56.606032] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:36.520 [2024-07-25 17:00:56.606038] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ZWvKTCpXne 00:21:36.520 request: 00:21:36.520 { 00:21:36.520 "name": "TLSTEST", 00:21:36.520 "trtype": "tcp", 00:21:36.520 "traddr": "10.0.0.2", 00:21:36.520 "adrfam": "ipv4", 00:21:36.520 "trsvcid": "4420", 00:21:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.520 "prchk_reftag": false, 00:21:36.520 "prchk_guard": false, 00:21:36.520 "hdgst": false, 00:21:36.520 "ddgst": false, 00:21:36.520 "psk": "/tmp/tmp.ZWvKTCpXne", 00:21:36.520 "method": "bdev_nvme_attach_controller", 00:21:36.520 "req_id": 1 00:21:36.520 } 00:21:36.520 Got JSON-RPC error response 00:21:36.520 response: 00:21:36.520 { 00:21:36.520 "code": -1, 00:21:36.520 "message": "Operation not permitted" 00:21:36.520 } 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1467558 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1467558 ']' 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1467558 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467558 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467558' 00:21:36.520 killing process with pid 1467558 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1467558 00:21:36.520 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.520 
00:21:36.520 Latency(us) 00:21:36.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.520 =================================================================================================================== 00:21:36.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1467558 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1465050 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1465050 ']' 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1465050 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.520 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465050 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465050' 00:21:36.782 killing process with pid 1465050 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1465050 00:21:36.782 [2024-07-25 17:00:56.836842] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1465050 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.782 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1467816 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1467816 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1467816 ']' 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.783 17:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:36.783 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.783 [2024-07-25 17:00:57.015406] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:36.783 [2024-07-25 17:00:57.015461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.783 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.045 [2024-07-25 17:00:57.098452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.045 [2024-07-25 17:00:57.152638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.045 [2024-07-25 17:00:57.152672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.045 [2024-07-25 17:00:57.152677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.045 [2024-07-25 17:00:57.152682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.045 [2024-07-25 17:00:57.152686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
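The chmod 0666 phase above exercised the initiator-side permission check: with the key file world-readable, bdev_nvme_attach_controller refuses to load it ("Incorrect permissions for PSK file", JSON-RPC Operation not permitted), and the harness again counts the failure as a pass. A fresh nvmf_tgt (pid 1467816) has just been started so the lines below can repeat the same check on the target side, where nvmf_subsystem_add_host is expected to fail with an Internal error until the mode is put back to 0600. Condensed, the two checks look like this (flags copied from the surrounding trace, rpc.py path shortened):

chmod 0666 /tmp/tmp.ZWvKTCpXne
# Initiator-side check: fails with "Operation not permitted"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne
# Target-side check: fails with "Internal error" ("Could not retrieve PSK from file")
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne
chmod 0600 /tmp/tmp.ZWvKTCpXne   # restored before the final positive run below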
00:21:37.045 [2024-07-25 17:00:57.152704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ZWvKTCpXne 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZWvKTCpXne 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ZWvKTCpXne 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZWvKTCpXne 00:21:37.633 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:37.899 [2024-07-25 17:00:57.954386] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.899 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:37.899 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:38.160 [2024-07-25 17:00:58.259137] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.160 [2024-07-25 17:00:58.259323] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.160 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:38.160 malloc0 00:21:38.160 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:38.422 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne 00:21:38.684 [2024-07-25 17:00:58.722217] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:38.684 [2024-07-25 17:00:58.722237] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:38.684 [2024-07-25 17:00:58.722257] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:38.684 request: 00:21:38.684 { 00:21:38.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.684 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.684 "psk": "/tmp/tmp.ZWvKTCpXne", 00:21:38.684 "method": "nvmf_subsystem_add_host", 00:21:38.684 "req_id": 1 00:21:38.684 } 00:21:38.684 Got JSON-RPC error response 00:21:38.684 response: 00:21:38.684 { 00:21:38.684 "code": -32603, 00:21:38.684 "message": "Internal error" 00:21:38.684 } 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1467816 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1467816 ']' 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1467816 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467816 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467816' 00:21:38.684 killing process with pid 1467816 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1467816 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1467816 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ZWvKTCpXne 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1468271 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1468271 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1468271 ']' 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.684 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.946 [2024-07-25 17:00:58.978159] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:38.946 [2024-07-25 17:00:58.978222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.946 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.946 [2024-07-25 17:00:59.060461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.946 [2024-07-25 17:00:59.114613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.946 [2024-07-25 17:00:59.114644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.946 [2024-07-25 17:00:59.114650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.946 [2024-07-25 17:00:59.114654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.946 [2024-07-25 17:00:59.114658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
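With the key back at mode 0600, the nvmf_tgt below (pid 1468271) is configured the same way as before, a TLSTEST controller attaches successfully, and the harness then snapshots both processes with save_config; the long JSON blocks that follow are those two dumps (the PSK path shows up under nvmf_subsystem_add_host in the target config and under bdev_nvme_attach_controller in the bdevperf config). The capture itself is just two RPC calls, sketched here with shortened paths and illustrative output file names of my choosing:

# Running configuration of the nvmf target (default RPC socket)
./scripts/rpc.py save_config > tgt_config.json
# Running configuration of bdevperf (RPC socket given to it with -r)
./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json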
00:21:38.946 [2024-07-25 17:00:59.114672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ZWvKTCpXne 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZWvKTCpXne 00:21:39.519 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.780 [2024-07-25 17:00:59.916485] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.780 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:40.041 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:40.041 [2024-07-25 17:01:00.213220] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.041 [2024-07-25 17:01:00.213402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.041 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:40.302 malloc0 00:21:40.302 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:40.302 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne 00:21:40.563 [2024-07-25 17:01:00.648131] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1468634 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1468634 /var/tmp/bdevperf.sock 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1468634 ']' 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.563 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.563 [2024-07-25 17:01:00.711683] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:40.563 [2024-07-25 17:01:00.711733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468634 ] 00:21:40.563 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.563 [2024-07-25 17:01:00.760562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.563 [2024-07-25 17:01:00.812622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.505 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.505 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.505 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne 00:21:41.505 [2024-07-25 17:01:01.581360] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.505 [2024-07-25 17:01:01.581420] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:41.505 TLSTESTn1 00:21:41.505 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:41.767 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:41.767 "subsystems": [ 00:21:41.767 { 00:21:41.767 "subsystem": "keyring", 00:21:41.767 "config": [] 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "subsystem": "iobuf", 00:21:41.767 "config": [ 00:21:41.767 { 00:21:41.767 "method": "iobuf_set_options", 00:21:41.767 "params": { 00:21:41.767 "small_pool_count": 8192, 00:21:41.767 "large_pool_count": 1024, 00:21:41.767 "small_bufsize": 8192, 00:21:41.767 "large_bufsize": 135168 00:21:41.767 } 00:21:41.767 } 00:21:41.767 ] 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "subsystem": "sock", 00:21:41.767 "config": [ 00:21:41.767 { 00:21:41.767 "method": "sock_set_default_impl", 00:21:41.767 "params": { 00:21:41.767 "impl_name": "posix" 00:21:41.767 } 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "method": "sock_impl_set_options", 00:21:41.767 "params": { 00:21:41.767 "impl_name": "ssl", 00:21:41.767 "recv_buf_size": 4096, 00:21:41.767 "send_buf_size": 4096, 
00:21:41.767 "enable_recv_pipe": true, 00:21:41.767 "enable_quickack": false, 00:21:41.767 "enable_placement_id": 0, 00:21:41.767 "enable_zerocopy_send_server": true, 00:21:41.767 "enable_zerocopy_send_client": false, 00:21:41.767 "zerocopy_threshold": 0, 00:21:41.767 "tls_version": 0, 00:21:41.767 "enable_ktls": false 00:21:41.767 } 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "method": "sock_impl_set_options", 00:21:41.767 "params": { 00:21:41.767 "impl_name": "posix", 00:21:41.767 "recv_buf_size": 2097152, 00:21:41.767 "send_buf_size": 2097152, 00:21:41.767 "enable_recv_pipe": true, 00:21:41.767 "enable_quickack": false, 00:21:41.767 "enable_placement_id": 0, 00:21:41.767 "enable_zerocopy_send_server": true, 00:21:41.767 "enable_zerocopy_send_client": false, 00:21:41.767 "zerocopy_threshold": 0, 00:21:41.767 "tls_version": 0, 00:21:41.767 "enable_ktls": false 00:21:41.767 } 00:21:41.767 } 00:21:41.767 ] 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "subsystem": "vmd", 00:21:41.767 "config": [] 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "subsystem": "accel", 00:21:41.767 "config": [ 00:21:41.767 { 00:21:41.767 "method": "accel_set_options", 00:21:41.767 "params": { 00:21:41.767 "small_cache_size": 128, 00:21:41.767 "large_cache_size": 16, 00:21:41.767 "task_count": 2048, 00:21:41.767 "sequence_count": 2048, 00:21:41.767 "buf_count": 2048 00:21:41.767 } 00:21:41.767 } 00:21:41.767 ] 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "subsystem": "bdev", 00:21:41.767 "config": [ 00:21:41.767 { 00:21:41.767 "method": "bdev_set_options", 00:21:41.767 "params": { 00:21:41.767 "bdev_io_pool_size": 65535, 00:21:41.767 "bdev_io_cache_size": 256, 00:21:41.767 "bdev_auto_examine": true, 00:21:41.767 "iobuf_small_cache_size": 128, 00:21:41.767 "iobuf_large_cache_size": 16 00:21:41.767 } 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "method": "bdev_raid_set_options", 00:21:41.767 "params": { 00:21:41.767 "process_window_size_kb": 1024, 00:21:41.767 "process_max_bandwidth_mb_sec": 0 00:21:41.767 } 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "method": "bdev_iscsi_set_options", 00:21:41.767 "params": { 00:21:41.767 "timeout_sec": 30 00:21:41.767 } 00:21:41.767 }, 00:21:41.767 { 00:21:41.767 "method": "bdev_nvme_set_options", 00:21:41.767 "params": { 00:21:41.767 "action_on_timeout": "none", 00:21:41.767 "timeout_us": 0, 00:21:41.767 "timeout_admin_us": 0, 00:21:41.768 "keep_alive_timeout_ms": 10000, 00:21:41.768 "arbitration_burst": 0, 00:21:41.768 "low_priority_weight": 0, 00:21:41.768 "medium_priority_weight": 0, 00:21:41.768 "high_priority_weight": 0, 00:21:41.768 "nvme_adminq_poll_period_us": 10000, 00:21:41.768 "nvme_ioq_poll_period_us": 0, 00:21:41.768 "io_queue_requests": 0, 00:21:41.768 "delay_cmd_submit": true, 00:21:41.768 "transport_retry_count": 4, 00:21:41.768 "bdev_retry_count": 3, 00:21:41.768 "transport_ack_timeout": 0, 00:21:41.768 "ctrlr_loss_timeout_sec": 0, 00:21:41.768 "reconnect_delay_sec": 0, 00:21:41.768 "fast_io_fail_timeout_sec": 0, 00:21:41.768 "disable_auto_failback": false, 00:21:41.768 "generate_uuids": false, 00:21:41.768 "transport_tos": 0, 00:21:41.768 "nvme_error_stat": false, 00:21:41.768 "rdma_srq_size": 0, 00:21:41.768 "io_path_stat": false, 00:21:41.768 "allow_accel_sequence": false, 00:21:41.768 "rdma_max_cq_size": 0, 00:21:41.768 "rdma_cm_event_timeout_ms": 0, 00:21:41.768 "dhchap_digests": [ 00:21:41.768 "sha256", 00:21:41.768 "sha384", 00:21:41.768 "sha512" 00:21:41.768 ], 00:21:41.768 "dhchap_dhgroups": [ 00:21:41.768 "null", 00:21:41.768 "ffdhe2048", 00:21:41.768 
"ffdhe3072", 00:21:41.768 "ffdhe4096", 00:21:41.768 "ffdhe6144", 00:21:41.768 "ffdhe8192" 00:21:41.768 ] 00:21:41.768 } 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "method": "bdev_nvme_set_hotplug", 00:21:41.768 "params": { 00:21:41.768 "period_us": 100000, 00:21:41.768 "enable": false 00:21:41.768 } 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "method": "bdev_malloc_create", 00:21:41.768 "params": { 00:21:41.768 "name": "malloc0", 00:21:41.768 "num_blocks": 8192, 00:21:41.768 "block_size": 4096, 00:21:41.768 "physical_block_size": 4096, 00:21:41.768 "uuid": "50a03550-ec56-4818-b34b-59604d6362da", 00:21:41.768 "optimal_io_boundary": 0, 00:21:41.768 "md_size": 0, 00:21:41.768 "dif_type": 0, 00:21:41.768 "dif_is_head_of_md": false, 00:21:41.768 "dif_pi_format": 0 00:21:41.768 } 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "method": "bdev_wait_for_examine" 00:21:41.768 } 00:21:41.768 ] 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "subsystem": "nbd", 00:21:41.768 "config": [] 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "subsystem": "scheduler", 00:21:41.768 "config": [ 00:21:41.768 { 00:21:41.768 "method": "framework_set_scheduler", 00:21:41.768 "params": { 00:21:41.768 "name": "static" 00:21:41.768 } 00:21:41.768 } 00:21:41.768 ] 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "subsystem": "nvmf", 00:21:41.768 "config": [ 00:21:41.768 { 00:21:41.768 "method": "nvmf_set_config", 00:21:41.768 "params": { 00:21:41.768 "discovery_filter": "match_any", 00:21:41.768 "admin_cmd_passthru": { 00:21:41.768 "identify_ctrlr": false 00:21:41.768 } 00:21:41.768 } 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "method": "nvmf_set_max_subsystems", 00:21:41.768 "params": { 00:21:41.768 "max_subsystems": 1024 00:21:41.768 } 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "method": "nvmf_set_crdt", 00:21:41.768 "params": { 00:21:41.768 "crdt1": 0, 00:21:41.768 "crdt2": 0, 00:21:41.768 "crdt3": 0 00:21:41.768 } 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "method": "nvmf_create_transport", 00:21:41.768 "params": { 00:21:41.768 "trtype": "TCP", 00:21:41.768 "max_queue_depth": 128, 00:21:41.768 "max_io_qpairs_per_ctrlr": 127, 00:21:41.768 "in_capsule_data_size": 4096, 00:21:41.768 "max_io_size": 131072, 00:21:41.768 "io_unit_size": 131072, 00:21:41.768 "max_aq_depth": 128, 00:21:41.768 "num_shared_buffers": 511, 00:21:41.768 "buf_cache_size": 4294967295, 00:21:41.768 "dif_insert_or_strip": false, 00:21:41.768 "zcopy": false, 00:21:41.768 "c2h_success": false, 00:21:41.768 "sock_priority": 0, 00:21:41.768 "abort_timeout_sec": 1, 00:21:41.768 "ack_timeout": 0, 00:21:41.768 "data_wr_pool_size": 0 00:21:41.768 } 00:21:41.768 }, 00:21:41.768 { 00:21:41.768 "method": "nvmf_create_subsystem", 00:21:41.768 "params": { 00:21:41.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.768 "allow_any_host": false, 00:21:41.768 "serial_number": "SPDK00000000000001", 00:21:41.768 "model_number": "SPDK bdev Controller", 00:21:41.768 "max_namespaces": 10, 00:21:41.768 "min_cntlid": 1, 00:21:41.768 "max_cntlid": 65519, 00:21:41.768 "ana_reporting": false 00:21:41.768 } 00:21:41.769 }, 00:21:41.769 { 00:21:41.769 "method": "nvmf_subsystem_add_host", 00:21:41.769 "params": { 00:21:41.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.769 "host": "nqn.2016-06.io.spdk:host1", 00:21:41.769 "psk": "/tmp/tmp.ZWvKTCpXne" 00:21:41.769 } 00:21:41.769 }, 00:21:41.769 { 00:21:41.769 "method": "nvmf_subsystem_add_ns", 00:21:41.769 "params": { 00:21:41.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.769 "namespace": { 00:21:41.769 "nsid": 1, 00:21:41.769 
"bdev_name": "malloc0", 00:21:41.769 "nguid": "50A03550EC564818B34B59604D6362DA", 00:21:41.769 "uuid": "50a03550-ec56-4818-b34b-59604d6362da", 00:21:41.769 "no_auto_visible": false 00:21:41.769 } 00:21:41.769 } 00:21:41.769 }, 00:21:41.769 { 00:21:41.769 "method": "nvmf_subsystem_add_listener", 00:21:41.769 "params": { 00:21:41.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.769 "listen_address": { 00:21:41.769 "trtype": "TCP", 00:21:41.769 "adrfam": "IPv4", 00:21:41.769 "traddr": "10.0.0.2", 00:21:41.769 "trsvcid": "4420" 00:21:41.769 }, 00:21:41.769 "secure_channel": true 00:21:41.769 } 00:21:41.769 } 00:21:41.769 ] 00:21:41.769 } 00:21:41.769 ] 00:21:41.769 }' 00:21:41.769 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:42.031 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:42.031 "subsystems": [ 00:21:42.031 { 00:21:42.031 "subsystem": "keyring", 00:21:42.031 "config": [] 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "subsystem": "iobuf", 00:21:42.031 "config": [ 00:21:42.031 { 00:21:42.031 "method": "iobuf_set_options", 00:21:42.031 "params": { 00:21:42.031 "small_pool_count": 8192, 00:21:42.031 "large_pool_count": 1024, 00:21:42.031 "small_bufsize": 8192, 00:21:42.031 "large_bufsize": 135168 00:21:42.031 } 00:21:42.031 } 00:21:42.031 ] 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "subsystem": "sock", 00:21:42.031 "config": [ 00:21:42.031 { 00:21:42.031 "method": "sock_set_default_impl", 00:21:42.031 "params": { 00:21:42.031 "impl_name": "posix" 00:21:42.031 } 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "method": "sock_impl_set_options", 00:21:42.031 "params": { 00:21:42.031 "impl_name": "ssl", 00:21:42.031 "recv_buf_size": 4096, 00:21:42.031 "send_buf_size": 4096, 00:21:42.031 "enable_recv_pipe": true, 00:21:42.031 "enable_quickack": false, 00:21:42.031 "enable_placement_id": 0, 00:21:42.031 "enable_zerocopy_send_server": true, 00:21:42.031 "enable_zerocopy_send_client": false, 00:21:42.031 "zerocopy_threshold": 0, 00:21:42.031 "tls_version": 0, 00:21:42.031 "enable_ktls": false 00:21:42.031 } 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "method": "sock_impl_set_options", 00:21:42.031 "params": { 00:21:42.031 "impl_name": "posix", 00:21:42.031 "recv_buf_size": 2097152, 00:21:42.031 "send_buf_size": 2097152, 00:21:42.031 "enable_recv_pipe": true, 00:21:42.031 "enable_quickack": false, 00:21:42.031 "enable_placement_id": 0, 00:21:42.031 "enable_zerocopy_send_server": true, 00:21:42.031 "enable_zerocopy_send_client": false, 00:21:42.031 "zerocopy_threshold": 0, 00:21:42.031 "tls_version": 0, 00:21:42.031 "enable_ktls": false 00:21:42.031 } 00:21:42.031 } 00:21:42.031 ] 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "subsystem": "vmd", 00:21:42.031 "config": [] 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "subsystem": "accel", 00:21:42.031 "config": [ 00:21:42.031 { 00:21:42.031 "method": "accel_set_options", 00:21:42.031 "params": { 00:21:42.031 "small_cache_size": 128, 00:21:42.031 "large_cache_size": 16, 00:21:42.031 "task_count": 2048, 00:21:42.031 "sequence_count": 2048, 00:21:42.031 "buf_count": 2048 00:21:42.031 } 00:21:42.031 } 00:21:42.031 ] 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "subsystem": "bdev", 00:21:42.031 "config": [ 00:21:42.031 { 00:21:42.031 "method": "bdev_set_options", 00:21:42.031 "params": { 00:21:42.031 "bdev_io_pool_size": 65535, 00:21:42.031 "bdev_io_cache_size": 256, 00:21:42.031 
"bdev_auto_examine": true, 00:21:42.031 "iobuf_small_cache_size": 128, 00:21:42.031 "iobuf_large_cache_size": 16 00:21:42.031 } 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "method": "bdev_raid_set_options", 00:21:42.031 "params": { 00:21:42.031 "process_window_size_kb": 1024, 00:21:42.031 "process_max_bandwidth_mb_sec": 0 00:21:42.031 } 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "method": "bdev_iscsi_set_options", 00:21:42.031 "params": { 00:21:42.031 "timeout_sec": 30 00:21:42.031 } 00:21:42.031 }, 00:21:42.031 { 00:21:42.031 "method": "bdev_nvme_set_options", 00:21:42.031 "params": { 00:21:42.031 "action_on_timeout": "none", 00:21:42.031 "timeout_us": 0, 00:21:42.031 "timeout_admin_us": 0, 00:21:42.031 "keep_alive_timeout_ms": 10000, 00:21:42.031 "arbitration_burst": 0, 00:21:42.031 "low_priority_weight": 0, 00:21:42.031 "medium_priority_weight": 0, 00:21:42.031 "high_priority_weight": 0, 00:21:42.031 "nvme_adminq_poll_period_us": 10000, 00:21:42.031 "nvme_ioq_poll_period_us": 0, 00:21:42.031 "io_queue_requests": 512, 00:21:42.031 "delay_cmd_submit": true, 00:21:42.031 "transport_retry_count": 4, 00:21:42.031 "bdev_retry_count": 3, 00:21:42.031 "transport_ack_timeout": 0, 00:21:42.031 "ctrlr_loss_timeout_sec": 0, 00:21:42.031 "reconnect_delay_sec": 0, 00:21:42.031 "fast_io_fail_timeout_sec": 0, 00:21:42.032 "disable_auto_failback": false, 00:21:42.032 "generate_uuids": false, 00:21:42.032 "transport_tos": 0, 00:21:42.032 "nvme_error_stat": false, 00:21:42.032 "rdma_srq_size": 0, 00:21:42.032 "io_path_stat": false, 00:21:42.032 "allow_accel_sequence": false, 00:21:42.032 "rdma_max_cq_size": 0, 00:21:42.032 "rdma_cm_event_timeout_ms": 0, 00:21:42.032 "dhchap_digests": [ 00:21:42.032 "sha256", 00:21:42.032 "sha384", 00:21:42.032 "sha512" 00:21:42.032 ], 00:21:42.032 "dhchap_dhgroups": [ 00:21:42.032 "null", 00:21:42.032 "ffdhe2048", 00:21:42.032 "ffdhe3072", 00:21:42.032 "ffdhe4096", 00:21:42.032 "ffdhe6144", 00:21:42.032 "ffdhe8192" 00:21:42.032 ] 00:21:42.032 } 00:21:42.032 }, 00:21:42.032 { 00:21:42.032 "method": "bdev_nvme_attach_controller", 00:21:42.032 "params": { 00:21:42.032 "name": "TLSTEST", 00:21:42.032 "trtype": "TCP", 00:21:42.032 "adrfam": "IPv4", 00:21:42.032 "traddr": "10.0.0.2", 00:21:42.032 "trsvcid": "4420", 00:21:42.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.032 "prchk_reftag": false, 00:21:42.032 "prchk_guard": false, 00:21:42.032 "ctrlr_loss_timeout_sec": 0, 00:21:42.032 "reconnect_delay_sec": 0, 00:21:42.032 "fast_io_fail_timeout_sec": 0, 00:21:42.032 "psk": "/tmp/tmp.ZWvKTCpXne", 00:21:42.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.032 "hdgst": false, 00:21:42.032 "ddgst": false 00:21:42.032 } 00:21:42.032 }, 00:21:42.032 { 00:21:42.032 "method": "bdev_nvme_set_hotplug", 00:21:42.032 "params": { 00:21:42.032 "period_us": 100000, 00:21:42.032 "enable": false 00:21:42.032 } 00:21:42.032 }, 00:21:42.032 { 00:21:42.032 "method": "bdev_wait_for_examine" 00:21:42.032 } 00:21:42.032 ] 00:21:42.032 }, 00:21:42.032 { 00:21:42.032 "subsystem": "nbd", 00:21:42.032 "config": [] 00:21:42.032 } 00:21:42.032 ] 00:21:42.032 }' 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1468634 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1468634 ']' 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1468634 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468634 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468634' 00:21:42.032 killing process with pid 1468634 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1468634 00:21:42.032 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.032 00:21:42.032 Latency(us) 00:21:42.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.032 =================================================================================================================== 00:21:42.032 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.032 [2024-07-25 17:01:02.239000] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:42.032 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1468634 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1468271 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1468271 ']' 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1468271 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468271 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468271' 00:21:42.293 killing process with pid 1468271 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1468271 00:21:42.293 [2024-07-25 17:01:02.404437] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1468271 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.293 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:42.293 "subsystems": [ 00:21:42.293 { 
00:21:42.293 "subsystem": "keyring", 00:21:42.293 "config": [] 00:21:42.293 }, 00:21:42.293 { 00:21:42.293 "subsystem": "iobuf", 00:21:42.293 "config": [ 00:21:42.293 { 00:21:42.293 "method": "iobuf_set_options", 00:21:42.293 "params": { 00:21:42.293 "small_pool_count": 8192, 00:21:42.293 "large_pool_count": 1024, 00:21:42.293 "small_bufsize": 8192, 00:21:42.293 "large_bufsize": 135168 00:21:42.293 } 00:21:42.293 } 00:21:42.293 ] 00:21:42.293 }, 00:21:42.293 { 00:21:42.293 "subsystem": "sock", 00:21:42.293 "config": [ 00:21:42.293 { 00:21:42.293 "method": "sock_set_default_impl", 00:21:42.293 "params": { 00:21:42.293 "impl_name": "posix" 00:21:42.293 } 00:21:42.293 }, 00:21:42.293 { 00:21:42.293 "method": "sock_impl_set_options", 00:21:42.293 "params": { 00:21:42.293 "impl_name": "ssl", 00:21:42.293 "recv_buf_size": 4096, 00:21:42.293 "send_buf_size": 4096, 00:21:42.293 "enable_recv_pipe": true, 00:21:42.293 "enable_quickack": false, 00:21:42.293 "enable_placement_id": 0, 00:21:42.293 "enable_zerocopy_send_server": true, 00:21:42.293 "enable_zerocopy_send_client": false, 00:21:42.294 "zerocopy_threshold": 0, 00:21:42.294 "tls_version": 0, 00:21:42.294 "enable_ktls": false 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "sock_impl_set_options", 00:21:42.294 "params": { 00:21:42.294 "impl_name": "posix", 00:21:42.294 "recv_buf_size": 2097152, 00:21:42.294 "send_buf_size": 2097152, 00:21:42.294 "enable_recv_pipe": true, 00:21:42.294 "enable_quickack": false, 00:21:42.294 "enable_placement_id": 0, 00:21:42.294 "enable_zerocopy_send_server": true, 00:21:42.294 "enable_zerocopy_send_client": false, 00:21:42.294 "zerocopy_threshold": 0, 00:21:42.294 "tls_version": 0, 00:21:42.294 "enable_ktls": false 00:21:42.294 } 00:21:42.294 } 00:21:42.294 ] 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "subsystem": "vmd", 00:21:42.294 "config": [] 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "subsystem": "accel", 00:21:42.294 "config": [ 00:21:42.294 { 00:21:42.294 "method": "accel_set_options", 00:21:42.294 "params": { 00:21:42.294 "small_cache_size": 128, 00:21:42.294 "large_cache_size": 16, 00:21:42.294 "task_count": 2048, 00:21:42.294 "sequence_count": 2048, 00:21:42.294 "buf_count": 2048 00:21:42.294 } 00:21:42.294 } 00:21:42.294 ] 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "subsystem": "bdev", 00:21:42.294 "config": [ 00:21:42.294 { 00:21:42.294 "method": "bdev_set_options", 00:21:42.294 "params": { 00:21:42.294 "bdev_io_pool_size": 65535, 00:21:42.294 "bdev_io_cache_size": 256, 00:21:42.294 "bdev_auto_examine": true, 00:21:42.294 "iobuf_small_cache_size": 128, 00:21:42.294 "iobuf_large_cache_size": 16 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "bdev_raid_set_options", 00:21:42.294 "params": { 00:21:42.294 "process_window_size_kb": 1024, 00:21:42.294 "process_max_bandwidth_mb_sec": 0 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "bdev_iscsi_set_options", 00:21:42.294 "params": { 00:21:42.294 "timeout_sec": 30 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "bdev_nvme_set_options", 00:21:42.294 "params": { 00:21:42.294 "action_on_timeout": "none", 00:21:42.294 "timeout_us": 0, 00:21:42.294 "timeout_admin_us": 0, 00:21:42.294 "keep_alive_timeout_ms": 10000, 00:21:42.294 "arbitration_burst": 0, 00:21:42.294 "low_priority_weight": 0, 00:21:42.294 "medium_priority_weight": 0, 00:21:42.294 "high_priority_weight": 0, 00:21:42.294 "nvme_adminq_poll_period_us": 10000, 00:21:42.294 "nvme_ioq_poll_period_us": 0, 00:21:42.294 
"io_queue_requests": 0, 00:21:42.294 "delay_cmd_submit": true, 00:21:42.294 "transport_retry_count": 4, 00:21:42.294 "bdev_retry_count": 3, 00:21:42.294 "transport_ack_timeout": 0, 00:21:42.294 "ctrlr_loss_timeout_sec": 0, 00:21:42.294 "reconnect_delay_sec": 0, 00:21:42.294 "fast_io_fail_timeout_sec": 0, 00:21:42.294 "disable_auto_failback": false, 00:21:42.294 "generate_uuids": false, 00:21:42.294 "transport_tos": 0, 00:21:42.294 "nvme_error_stat": false, 00:21:42.294 "rdma_srq_size": 0, 00:21:42.294 "io_path_stat": false, 00:21:42.294 "allow_accel_sequence": false, 00:21:42.294 "rdma_max_cq_size": 0, 00:21:42.294 "rdma_cm_event_timeout_ms": 0, 00:21:42.294 "dhchap_digests": [ 00:21:42.294 "sha256", 00:21:42.294 "sha384", 00:21:42.294 "sha512" 00:21:42.294 ], 00:21:42.294 "dhchap_dhgroups": [ 00:21:42.294 "null", 00:21:42.294 "ffdhe2048", 00:21:42.294 "ffdhe3072", 00:21:42.294 "ffdhe4096", 00:21:42.294 "ffdhe6144", 00:21:42.294 "ffdhe8192" 00:21:42.294 ] 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "bdev_nvme_set_hotplug", 00:21:42.294 "params": { 00:21:42.294 "period_us": 100000, 00:21:42.294 "enable": false 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "bdev_malloc_create", 00:21:42.294 "params": { 00:21:42.294 "name": "malloc0", 00:21:42.294 "num_blocks": 8192, 00:21:42.294 "block_size": 4096, 00:21:42.294 "physical_block_size": 4096, 00:21:42.294 "uuid": "50a03550-ec56-4818-b34b-59604d6362da", 00:21:42.294 "optimal_io_boundary": 0, 00:21:42.294 "md_size": 0, 00:21:42.294 "dif_type": 0, 00:21:42.294 "dif_is_head_of_md": false, 00:21:42.294 "dif_pi_format": 0 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "bdev_wait_for_examine" 00:21:42.294 } 00:21:42.294 ] 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "subsystem": "nbd", 00:21:42.294 "config": [] 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "subsystem": "scheduler", 00:21:42.294 "config": [ 00:21:42.294 { 00:21:42.294 "method": "framework_set_scheduler", 00:21:42.294 "params": { 00:21:42.294 "name": "static" 00:21:42.294 } 00:21:42.294 } 00:21:42.294 ] 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "subsystem": "nvmf", 00:21:42.294 "config": [ 00:21:42.294 { 00:21:42.294 "method": "nvmf_set_config", 00:21:42.294 "params": { 00:21:42.294 "discovery_filter": "match_any", 00:21:42.294 "admin_cmd_passthru": { 00:21:42.294 "identify_ctrlr": false 00:21:42.294 } 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "nvmf_set_max_subsystems", 00:21:42.294 "params": { 00:21:42.294 "max_subsystems": 1024 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "nvmf_set_crdt", 00:21:42.294 "params": { 00:21:42.294 "crdt1": 0, 00:21:42.294 "crdt2": 0, 00:21:42.294 "crdt3": 0 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "nvmf_create_transport", 00:21:42.294 "params": { 00:21:42.294 "trtype": "TCP", 00:21:42.294 "max_queue_depth": 128, 00:21:42.294 "max_io_qpairs_per_ctrlr": 127, 00:21:42.294 "in_capsule_data_size": 4096, 00:21:42.294 "max_io_size": 131072, 00:21:42.294 "io_unit_size": 131072, 00:21:42.294 "max_aq_depth": 128, 00:21:42.294 "num_shared_buffers": 511, 00:21:42.294 "buf_cache_size": 4294967295, 00:21:42.294 "dif_insert_or_strip": false, 00:21:42.294 "zcopy": false, 00:21:42.294 "c2h_success": false, 00:21:42.294 "sock_priority": 0, 00:21:42.294 "abort_timeout_sec": 1, 00:21:42.294 "ack_timeout": 0, 00:21:42.294 "data_wr_pool_size": 0 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": 
"nvmf_create_subsystem", 00:21:42.294 "params": { 00:21:42.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.294 "allow_any_host": false, 00:21:42.294 "serial_number": "SPDK00000000000001", 00:21:42.294 "model_number": "SPDK bdev Controller", 00:21:42.294 "max_namespaces": 10, 00:21:42.294 "min_cntlid": 1, 00:21:42.294 "max_cntlid": 65519, 00:21:42.294 "ana_reporting": false 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "nvmf_subsystem_add_host", 00:21:42.294 "params": { 00:21:42.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.294 "host": "nqn.2016-06.io.spdk:host1", 00:21:42.294 "psk": "/tmp/tmp.ZWvKTCpXne" 00:21:42.294 } 00:21:42.294 }, 00:21:42.294 { 00:21:42.294 "method": "nvmf_subsystem_add_ns", 00:21:42.294 "params": { 00:21:42.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.294 "namespace": { 00:21:42.294 "nsid": 1, 00:21:42.294 "bdev_name": "malloc0", 00:21:42.294 "nguid": "50A03550EC564818B34B59604D6362DA", 00:21:42.295 "uuid": "50a03550-ec56-4818-b34b-59604d6362da", 00:21:42.295 "no_auto_visible": false 00:21:42.295 } 00:21:42.295 } 00:21:42.295 }, 00:21:42.295 { 00:21:42.295 "method": "nvmf_subsystem_add_listener", 00:21:42.295 "params": { 00:21:42.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.295 "listen_address": { 00:21:42.295 "trtype": "TCP", 00:21:42.295 "adrfam": "IPv4", 00:21:42.295 "traddr": "10.0.0.2", 00:21:42.295 "trsvcid": "4420" 00:21:42.295 }, 00:21:42.295 "secure_channel": true 00:21:42.295 } 00:21:42.295 } 00:21:42.295 ] 00:21:42.295 } 00:21:42.295 ] 00:21:42.295 }' 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1468985 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1468985 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1468985 ']' 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.295 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.556 [2024-07-25 17:01:02.589474] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:42.556 [2024-07-25 17:01:02.589532] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.556 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.556 [2024-07-25 17:01:02.672692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.556 [2024-07-25 17:01:02.726240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:42.556 [2024-07-25 17:01:02.726270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.556 [2024-07-25 17:01:02.726275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.556 [2024-07-25 17:01:02.726279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.556 [2024-07-25 17:01:02.726283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.556 [2024-07-25 17:01:02.726322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.816 [2024-07-25 17:01:02.909455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.816 [2024-07-25 17:01:02.930772] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:42.816 [2024-07-25 17:01:02.946823] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.816 [2024-07-25 17:01:02.947001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.077 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.077 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:43.077 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.077 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.077 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1469031 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1469031 /var/tmp/bdevperf.sock 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1469031 ']' 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
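For this test case the target is not configured over RPC at all: nvmf_tgt is started with -m 0x2 -c /dev/fd/62, so the whole subsystems document echoed above (TCP transport, nqn.2016-06.io.spdk:cnode1 with the malloc0 namespace, a host entry carrying "psk": "/tmp/tmp.ZWvKTCpXne", and a listener with "secure_channel": true) is applied as a JSON startup config. A minimal sketch of the same idea, assuming the config is kept in an ordinary file instead of the test's /dev/fd process substitution and omitting the ip netns wrapper used in CI:

    # Start an nvmf target from a pre-built JSON configuration; tgt_config.json
    # (name illustrative) would hold the "subsystems" document shown above,
    # including the TLS listener and the PSK registered for host1.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c tgt_config.json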
00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.337 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:43.337 "subsystems": [ 00:21:43.337 { 00:21:43.337 "subsystem": "keyring", 00:21:43.337 "config": [] 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "subsystem": "iobuf", 00:21:43.337 "config": [ 00:21:43.337 { 00:21:43.337 "method": "iobuf_set_options", 00:21:43.337 "params": { 00:21:43.337 "small_pool_count": 8192, 00:21:43.337 "large_pool_count": 1024, 00:21:43.337 "small_bufsize": 8192, 00:21:43.337 "large_bufsize": 135168 00:21:43.337 } 00:21:43.337 } 00:21:43.337 ] 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "subsystem": "sock", 00:21:43.337 "config": [ 00:21:43.337 { 00:21:43.337 "method": "sock_set_default_impl", 00:21:43.337 "params": { 00:21:43.337 "impl_name": "posix" 00:21:43.337 } 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "method": "sock_impl_set_options", 00:21:43.337 "params": { 00:21:43.337 "impl_name": "ssl", 00:21:43.337 "recv_buf_size": 4096, 00:21:43.337 "send_buf_size": 4096, 00:21:43.337 "enable_recv_pipe": true, 00:21:43.337 "enable_quickack": false, 00:21:43.337 "enable_placement_id": 0, 00:21:43.337 "enable_zerocopy_send_server": true, 00:21:43.337 "enable_zerocopy_send_client": false, 00:21:43.337 "zerocopy_threshold": 0, 00:21:43.337 "tls_version": 0, 00:21:43.337 "enable_ktls": false 00:21:43.337 } 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "method": "sock_impl_set_options", 00:21:43.337 "params": { 00:21:43.337 "impl_name": "posix", 00:21:43.337 "recv_buf_size": 2097152, 00:21:43.337 "send_buf_size": 2097152, 00:21:43.337 "enable_recv_pipe": true, 00:21:43.337 "enable_quickack": false, 00:21:43.337 "enable_placement_id": 0, 00:21:43.337 "enable_zerocopy_send_server": true, 00:21:43.337 "enable_zerocopy_send_client": false, 00:21:43.337 "zerocopy_threshold": 0, 00:21:43.337 "tls_version": 0, 00:21:43.337 "enable_ktls": false 00:21:43.337 } 00:21:43.337 } 00:21:43.337 ] 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "subsystem": "vmd", 00:21:43.337 "config": [] 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "subsystem": "accel", 00:21:43.337 "config": [ 00:21:43.337 { 00:21:43.337 "method": "accel_set_options", 00:21:43.337 "params": { 00:21:43.337 "small_cache_size": 128, 00:21:43.337 "large_cache_size": 16, 00:21:43.337 "task_count": 2048, 00:21:43.337 "sequence_count": 2048, 00:21:43.337 "buf_count": 2048 00:21:43.337 } 00:21:43.337 } 00:21:43.337 ] 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "subsystem": "bdev", 00:21:43.337 "config": [ 00:21:43.337 { 00:21:43.337 "method": "bdev_set_options", 00:21:43.337 "params": { 00:21:43.337 "bdev_io_pool_size": 65535, 00:21:43.337 "bdev_io_cache_size": 256, 00:21:43.337 "bdev_auto_examine": true, 00:21:43.337 "iobuf_small_cache_size": 128, 00:21:43.337 "iobuf_large_cache_size": 16 00:21:43.337 } 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "method": "bdev_raid_set_options", 00:21:43.337 "params": { 00:21:43.337 "process_window_size_kb": 1024, 00:21:43.337 "process_max_bandwidth_mb_sec": 0 00:21:43.337 } 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "method": "bdev_iscsi_set_options", 
00:21:43.337 "params": { 00:21:43.337 "timeout_sec": 30 00:21:43.337 } 00:21:43.337 }, 00:21:43.337 { 00:21:43.337 "method": "bdev_nvme_set_options", 00:21:43.337 "params": { 00:21:43.337 "action_on_timeout": "none", 00:21:43.337 "timeout_us": 0, 00:21:43.337 "timeout_admin_us": 0, 00:21:43.337 "keep_alive_timeout_ms": 10000, 00:21:43.337 "arbitration_burst": 0, 00:21:43.337 "low_priority_weight": 0, 00:21:43.337 "medium_priority_weight": 0, 00:21:43.337 "high_priority_weight": 0, 00:21:43.338 "nvme_adminq_poll_period_us": 10000, 00:21:43.338 "nvme_ioq_poll_period_us": 0, 00:21:43.338 "io_queue_requests": 512, 00:21:43.338 "delay_cmd_submit": true, 00:21:43.338 "transport_retry_count": 4, 00:21:43.338 "bdev_retry_count": 3, 00:21:43.338 "transport_ack_timeout": 0, 00:21:43.338 "ctrlr_loss_timeout_sec": 0, 00:21:43.338 "reconnect_delay_sec": 0, 00:21:43.338 "fast_io_fail_timeout_sec": 0, 00:21:43.338 "disable_auto_failback": false, 00:21:43.338 "generate_uuids": false, 00:21:43.338 "transport_tos": 0, 00:21:43.338 "nvme_error_stat": false, 00:21:43.338 "rdma_srq_size": 0, 00:21:43.338 "io_path_stat": false, 00:21:43.338 "allow_accel_sequence": false, 00:21:43.338 "rdma_max_cq_size": 0, 00:21:43.338 "rdma_cm_event_timeout_ms": 0, 00:21:43.338 "dhchap_digests": [ 00:21:43.338 "sha256", 00:21:43.338 "sha384", 00:21:43.338 "sha512" 00:21:43.338 ], 00:21:43.338 "dhchap_dhgroups": [ 00:21:43.338 "null", 00:21:43.338 "ffdhe2048", 00:21:43.338 "ffdhe3072", 00:21:43.338 "ffdhe4096", 00:21:43.338 "ffdhe6144", 00:21:43.338 "ffdhe8192" 00:21:43.338 ] 00:21:43.338 } 00:21:43.338 }, 00:21:43.338 { 00:21:43.338 "method": "bdev_nvme_attach_controller", 00:21:43.338 "params": { 00:21:43.338 "name": "TLSTEST", 00:21:43.338 "trtype": "TCP", 00:21:43.338 "adrfam": "IPv4", 00:21:43.338 "traddr": "10.0.0.2", 00:21:43.338 "trsvcid": "4420", 00:21:43.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.338 "prchk_reftag": false, 00:21:43.338 "prchk_guard": false, 00:21:43.338 "ctrlr_loss_timeout_sec": 0, 00:21:43.338 "reconnect_delay_sec": 0, 00:21:43.338 "fast_io_fail_timeout_sec": 0, 00:21:43.338 "psk": "/tmp/tmp.ZWvKTCpXne", 00:21:43.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.338 "hdgst": false, 00:21:43.338 "ddgst": false 00:21:43.338 } 00:21:43.338 }, 00:21:43.338 { 00:21:43.338 "method": "bdev_nvme_set_hotplug", 00:21:43.338 "params": { 00:21:43.338 "period_us": 100000, 00:21:43.338 "enable": false 00:21:43.338 } 00:21:43.338 }, 00:21:43.338 { 00:21:43.338 "method": "bdev_wait_for_examine" 00:21:43.338 } 00:21:43.338 ] 00:21:43.338 }, 00:21:43.338 { 00:21:43.338 "subsystem": "nbd", 00:21:43.338 "config": [] 00:21:43.338 } 00:21:43.338 ] 00:21:43.338 }' 00:21:43.338 [2024-07-25 17:01:03.424940] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:21:43.338 [2024-07-25 17:01:03.424990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469031 ] 00:21:43.338 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.338 [2024-07-25 17:01:03.474089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.338 [2024-07-25 17:01:03.526482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.598 [2024-07-25 17:01:03.650975] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.598 [2024-07-25 17:01:03.651042] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:44.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:44.169 Running I/O for 10 seconds... 00:21:54.172 00:21:54.172 Latency(us) 00:21:54.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.172 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:54.172 Verification LBA range: start 0x0 length 0x2000 00:21:54.172 TLSTESTn1 : 10.07 2046.52 7.99 0.00 0.00 62335.54 4942.51 151169.71 00:21:54.172 =================================================================================================================== 00:21:54.172 Total : 2046.52 7.99 0.00 0.00 62335.54 4942.51 151169.71 00:21:54.172 0 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1469031 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1469031 ']' 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1469031 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469031 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469031' 00:21:54.172 killing process with pid 1469031 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1469031 00:21:54.172 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.172 00:21:54.172 Latency(us) 00:21:54.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.172 
=================================================================================================================== 00:21:54.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.172 [2024-07-25 17:01:14.441843] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:54.172 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1469031 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1468985 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1468985 ']' 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1468985 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468985 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:54.433 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468985' 00:21:54.434 killing process with pid 1468985 00:21:54.434 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1468985 00:21:54.434 [2024-07-25 17:01:14.609728] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:54.434 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1468985 00:21:54.695 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:54.695 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.695 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.695 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.695 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1471359 00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1471359 00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1471359 ']' 00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
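In the run that just shut down, the initiator side was bdevperf: it is launched with -z so it waits for RPC-driven tests, reads the JSON configuration echoed above from /dev/fd/63 (its bdev_nvme_attach_controller entry passes the PSK file directly as "psk": "/tmp/tmp.ZWvKTCpXne", which is why the nvme_ctrlr_psk deprecation warning fires), and the verify workload is then triggered with bdevperf.py perform_tests. A minimal sketch of that flow, assuming a config file on disk (name illustrative) and backgrounding bdevperf by hand instead of the test's waitforlisten helper:

    # Start bdevperf in wait-for-tests mode (-z) with the TLS attach config,
    # then drive the queued 10 s verify workload over its RPC socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c bdevperf_config.json &
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests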
00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.696 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.696 [2024-07-25 17:01:14.785777] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:54.696 [2024-07-25 17:01:14.785834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.696 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.696 [2024-07-25 17:01:14.854639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.696 [2024-07-25 17:01:14.919780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.696 [2024-07-25 17:01:14.919817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.696 [2024-07-25 17:01:14.919824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.696 [2024-07-25 17:01:14.919830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.696 [2024-07-25 17:01:14.919836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.696 [2024-07-25 17:01:14.919860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ZWvKTCpXne 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZWvKTCpXne 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:55.640 [2024-07-25 17:01:15.727045] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:55.640 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:55.901 [2024-07-25 17:01:16.023793] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.901 [2024-07-25 17:01:16.023994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.901 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:56.163 malloc0 00:21:56.163 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:56.163 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZWvKTCpXne 00:21:56.424 [2024-07-25 17:01:16.479754] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1471717 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1471717 /var/tmp/bdevperf.sock 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1471717 ']' 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.424 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.424 [2024-07-25 17:01:16.551558] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
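Here, by contrast, the target is brought up bare (no -c config) and configured step by step over RPC: the trace above creates the TCP transport, the subsystem, a TLS listener (-k), a malloc0 namespace, and finally registers host1's PSK file with nvmf_subsystem_add_host --psk. The same sequence collected into one standalone sketch with the arguments seen in the log (rpc.py talks to the target's default /var/tmp/spdk.sock):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport; -o matches the "c2h_success": false seen in the saved configs.
    $RPC nvmf_create_transport -t tcp -o
    # Subsystem with room for 10 namespaces, then a TLS-enabled (-k) TCP listener.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # 32 MB RAM-backed bdev (4096-byte blocks) exported as namespace 1.
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Allow host1 and bind it to the PSK file (the deprecated "PSK path" form).
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
         --psk /tmp/tmp.ZWvKTCpXne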
00:21:56.424 [2024-07-25 17:01:16.551613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471717 ] 00:21:56.424 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.424 [2024-07-25 17:01:16.625658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.424 [2024-07-25 17:01:16.679268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.367 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.367 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:57.367 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZWvKTCpXne 00:21:57.367 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:57.367 [2024-07-25 17:01:17.573235] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.629 nvme0n1 00:21:57.629 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:57.629 Running I/O for 1 seconds... 00:21:59.016 00:21:59.016 Latency(us) 00:21:59.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.016 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:59.016 Verification LBA range: start 0x0 length 0x2000 00:21:59.016 nvme0n1 : 1.09 1436.56 5.61 0.00 0.00 86181.90 6253.23 145053.01 00:21:59.016 =================================================================================================================== 00:21:59.016 Total : 1436.56 5.61 0.00 0.00 86181.90 6253.23 145053.01 00:21:59.016 0 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1471717 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1471717 ']' 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1471717 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471717 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471717' 00:21:59.016 killing process with pid 1471717 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1471717 00:21:59.016 Received shutdown signal, 
test time was about 1.000000 seconds 00:21:59.016 00:21:59.016 Latency(us) 00:21:59.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.016 =================================================================================================================== 00:21:59.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.016 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1471717 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1471359 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1471359 ']' 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1471359 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471359 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471359' 00:21:59.016 killing process with pid 1471359 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1471359 00:21:59.016 [2024-07-25 17:01:19.124442] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1471359 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1472234 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1472234 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1472234 ']' 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
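On the host side of the test that just completed, the PSK is no longer handed to the driver as a raw path: bdevperf first registers the key file in the keyring (keyring_file_add_key key0 /tmp/tmp.ZWvKTCpXne) and the controller is then attached with --psk key0, so this bdevperf shutdown no longer logs the spdk_nvme_ctrlr_opts.psk deprecation that the earlier run did (the target-side "PSK path" deprecation still fires because add_host was given a file path). The same two calls as a standalone sketch against the bdevperf RPC socket used above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Register the PSK file as a named keyring key inside bdevperf.
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.ZWvKTCpXne
    # Attach the TLS-protected NVMe/TCP controller, referencing the key by name.
    $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1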
00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.016 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.278 [2024-07-25 17:01:19.326158] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:21:59.278 [2024-07-25 17:01:19.326222] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.278 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.278 [2024-07-25 17:01:19.391776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.278 [2024-07-25 17:01:19.455835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.278 [2024-07-25 17:01:19.455873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.278 [2024-07-25 17:01:19.455880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.278 [2024-07-25 17:01:19.455886] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.278 [2024-07-25 17:01:19.455892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.278 [2024-07-25 17:01:19.455910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.850 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.850 [2024-07-25 17:01:20.122299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.112 malloc0 00:22:00.112 [2024-07-25 17:01:20.148947] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.112 [2024-07-25 17:01:20.157401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1472431 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1472431 /var/tmp/bdevperf.sock 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:00.112 17:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1472431 ']' 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.112 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.112 [2024-07-25 17:01:20.231229] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:22:00.112 [2024-07-25 17:01:20.231275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472431 ] 00:22:00.112 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.112 [2024-07-25 17:01:20.304025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.112 [2024-07-25 17:01:20.357474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.056 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.056 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.056 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZWvKTCpXne 00:22:01.056 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:01.056 [2024-07-25 17:01:21.255244] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.317 nvme0n1 00:22:01.317 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.317 Running I/O for 1 seconds... 
00:22:02.307 00:22:02.307 Latency(us) 00:22:02.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.307 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:02.307 Verification LBA range: start 0x0 length 0x2000 00:22:02.307 nvme0n1 : 1.09 1575.59 6.15 0.00 0.00 78573.68 6034.77 120586.24 00:22:02.307 =================================================================================================================== 00:22:02.307 Total : 1575.59 6.15 0.00 0.00 78573.68 6034.77 120586.24 00:22:02.307 0 00:22:02.307 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:02.307 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.307 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.568 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.568 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:02.568 "subsystems": [ 00:22:02.568 { 00:22:02.568 "subsystem": "keyring", 00:22:02.568 "config": [ 00:22:02.568 { 00:22:02.568 "method": "keyring_file_add_key", 00:22:02.568 "params": { 00:22:02.568 "name": "key0", 00:22:02.568 "path": "/tmp/tmp.ZWvKTCpXne" 00:22:02.568 } 00:22:02.568 } 00:22:02.568 ] 00:22:02.568 }, 00:22:02.568 { 00:22:02.568 "subsystem": "iobuf", 00:22:02.568 "config": [ 00:22:02.568 { 00:22:02.568 "method": "iobuf_set_options", 00:22:02.568 "params": { 00:22:02.568 "small_pool_count": 8192, 00:22:02.568 "large_pool_count": 1024, 00:22:02.568 "small_bufsize": 8192, 00:22:02.568 "large_bufsize": 135168 00:22:02.568 } 00:22:02.569 } 00:22:02.569 ] 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "subsystem": "sock", 00:22:02.569 "config": [ 00:22:02.569 { 00:22:02.569 "method": "sock_set_default_impl", 00:22:02.569 "params": { 00:22:02.569 "impl_name": "posix" 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "sock_impl_set_options", 00:22:02.569 "params": { 00:22:02.569 "impl_name": "ssl", 00:22:02.569 "recv_buf_size": 4096, 00:22:02.569 "send_buf_size": 4096, 00:22:02.569 "enable_recv_pipe": true, 00:22:02.569 "enable_quickack": false, 00:22:02.569 "enable_placement_id": 0, 00:22:02.569 "enable_zerocopy_send_server": true, 00:22:02.569 "enable_zerocopy_send_client": false, 00:22:02.569 "zerocopy_threshold": 0, 00:22:02.569 "tls_version": 0, 00:22:02.569 "enable_ktls": false 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "sock_impl_set_options", 00:22:02.569 "params": { 00:22:02.569 "impl_name": "posix", 00:22:02.569 "recv_buf_size": 2097152, 00:22:02.569 "send_buf_size": 2097152, 00:22:02.569 "enable_recv_pipe": true, 00:22:02.569 "enable_quickack": false, 00:22:02.569 "enable_placement_id": 0, 00:22:02.569 "enable_zerocopy_send_server": true, 00:22:02.569 "enable_zerocopy_send_client": false, 00:22:02.569 "zerocopy_threshold": 0, 00:22:02.569 "tls_version": 0, 00:22:02.569 "enable_ktls": false 00:22:02.569 } 00:22:02.569 } 00:22:02.569 ] 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "subsystem": "vmd", 00:22:02.569 "config": [] 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "subsystem": "accel", 00:22:02.569 "config": [ 00:22:02.569 { 00:22:02.569 "method": "accel_set_options", 00:22:02.569 "params": { 00:22:02.569 "small_cache_size": 128, 00:22:02.569 "large_cache_size": 16, 00:22:02.569 "task_count": 2048, 00:22:02.569 "sequence_count": 2048, 00:22:02.569 "buf_count": 
2048 00:22:02.569 } 00:22:02.569 } 00:22:02.569 ] 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "subsystem": "bdev", 00:22:02.569 "config": [ 00:22:02.569 { 00:22:02.569 "method": "bdev_set_options", 00:22:02.569 "params": { 00:22:02.569 "bdev_io_pool_size": 65535, 00:22:02.569 "bdev_io_cache_size": 256, 00:22:02.569 "bdev_auto_examine": true, 00:22:02.569 "iobuf_small_cache_size": 128, 00:22:02.569 "iobuf_large_cache_size": 16 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "bdev_raid_set_options", 00:22:02.569 "params": { 00:22:02.569 "process_window_size_kb": 1024, 00:22:02.569 "process_max_bandwidth_mb_sec": 0 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "bdev_iscsi_set_options", 00:22:02.569 "params": { 00:22:02.569 "timeout_sec": 30 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "bdev_nvme_set_options", 00:22:02.569 "params": { 00:22:02.569 "action_on_timeout": "none", 00:22:02.569 "timeout_us": 0, 00:22:02.569 "timeout_admin_us": 0, 00:22:02.569 "keep_alive_timeout_ms": 10000, 00:22:02.569 "arbitration_burst": 0, 00:22:02.569 "low_priority_weight": 0, 00:22:02.569 "medium_priority_weight": 0, 00:22:02.569 "high_priority_weight": 0, 00:22:02.569 "nvme_adminq_poll_period_us": 10000, 00:22:02.569 "nvme_ioq_poll_period_us": 0, 00:22:02.569 "io_queue_requests": 0, 00:22:02.569 "delay_cmd_submit": true, 00:22:02.569 "transport_retry_count": 4, 00:22:02.569 "bdev_retry_count": 3, 00:22:02.569 "transport_ack_timeout": 0, 00:22:02.569 "ctrlr_loss_timeout_sec": 0, 00:22:02.569 "reconnect_delay_sec": 0, 00:22:02.569 "fast_io_fail_timeout_sec": 0, 00:22:02.569 "disable_auto_failback": false, 00:22:02.569 "generate_uuids": false, 00:22:02.569 "transport_tos": 0, 00:22:02.569 "nvme_error_stat": false, 00:22:02.569 "rdma_srq_size": 0, 00:22:02.569 "io_path_stat": false, 00:22:02.569 "allow_accel_sequence": false, 00:22:02.569 "rdma_max_cq_size": 0, 00:22:02.569 "rdma_cm_event_timeout_ms": 0, 00:22:02.569 "dhchap_digests": [ 00:22:02.569 "sha256", 00:22:02.569 "sha384", 00:22:02.569 "sha512" 00:22:02.569 ], 00:22:02.569 "dhchap_dhgroups": [ 00:22:02.569 "null", 00:22:02.569 "ffdhe2048", 00:22:02.569 "ffdhe3072", 00:22:02.569 "ffdhe4096", 00:22:02.569 "ffdhe6144", 00:22:02.569 "ffdhe8192" 00:22:02.569 ] 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "bdev_nvme_set_hotplug", 00:22:02.569 "params": { 00:22:02.569 "period_us": 100000, 00:22:02.569 "enable": false 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "bdev_malloc_create", 00:22:02.569 "params": { 00:22:02.569 "name": "malloc0", 00:22:02.569 "num_blocks": 8192, 00:22:02.569 "block_size": 4096, 00:22:02.569 "physical_block_size": 4096, 00:22:02.569 "uuid": "b77b5bc7-2763-4e76-97e2-0e4a7fe7fa4a", 00:22:02.569 "optimal_io_boundary": 0, 00:22:02.569 "md_size": 0, 00:22:02.569 "dif_type": 0, 00:22:02.569 "dif_is_head_of_md": false, 00:22:02.569 "dif_pi_format": 0 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "bdev_wait_for_examine" 00:22:02.569 } 00:22:02.569 ] 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "subsystem": "nbd", 00:22:02.569 "config": [] 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "subsystem": "scheduler", 00:22:02.569 "config": [ 00:22:02.569 { 00:22:02.569 "method": "framework_set_scheduler", 00:22:02.569 "params": { 00:22:02.569 "name": "static" 00:22:02.569 } 00:22:02.569 } 00:22:02.569 ] 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "subsystem": "nvmf", 00:22:02.569 "config": [ 00:22:02.569 { 00:22:02.569 
"method": "nvmf_set_config", 00:22:02.569 "params": { 00:22:02.569 "discovery_filter": "match_any", 00:22:02.569 "admin_cmd_passthru": { 00:22:02.569 "identify_ctrlr": false 00:22:02.569 } 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "nvmf_set_max_subsystems", 00:22:02.569 "params": { 00:22:02.569 "max_subsystems": 1024 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "nvmf_set_crdt", 00:22:02.569 "params": { 00:22:02.569 "crdt1": 0, 00:22:02.569 "crdt2": 0, 00:22:02.569 "crdt3": 0 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "nvmf_create_transport", 00:22:02.569 "params": { 00:22:02.569 "trtype": "TCP", 00:22:02.569 "max_queue_depth": 128, 00:22:02.569 "max_io_qpairs_per_ctrlr": 127, 00:22:02.569 "in_capsule_data_size": 4096, 00:22:02.569 "max_io_size": 131072, 00:22:02.569 "io_unit_size": 131072, 00:22:02.569 "max_aq_depth": 128, 00:22:02.569 "num_shared_buffers": 511, 00:22:02.569 "buf_cache_size": 4294967295, 00:22:02.569 "dif_insert_or_strip": false, 00:22:02.569 "zcopy": false, 00:22:02.569 "c2h_success": false, 00:22:02.569 "sock_priority": 0, 00:22:02.569 "abort_timeout_sec": 1, 00:22:02.569 "ack_timeout": 0, 00:22:02.569 "data_wr_pool_size": 0 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "nvmf_create_subsystem", 00:22:02.569 "params": { 00:22:02.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.569 "allow_any_host": false, 00:22:02.569 "serial_number": "00000000000000000000", 00:22:02.569 "model_number": "SPDK bdev Controller", 00:22:02.569 "max_namespaces": 32, 00:22:02.569 "min_cntlid": 1, 00:22:02.569 "max_cntlid": 65519, 00:22:02.569 "ana_reporting": false 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "nvmf_subsystem_add_host", 00:22:02.569 "params": { 00:22:02.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.569 "host": "nqn.2016-06.io.spdk:host1", 00:22:02.569 "psk": "key0" 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "nvmf_subsystem_add_ns", 00:22:02.569 "params": { 00:22:02.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.569 "namespace": { 00:22:02.569 "nsid": 1, 00:22:02.569 "bdev_name": "malloc0", 00:22:02.569 "nguid": "B77B5BC727634E7697E20E4A7FE7FA4A", 00:22:02.569 "uuid": "b77b5bc7-2763-4e76-97e2-0e4a7fe7fa4a", 00:22:02.569 "no_auto_visible": false 00:22:02.569 } 00:22:02.569 } 00:22:02.569 }, 00:22:02.569 { 00:22:02.569 "method": "nvmf_subsystem_add_listener", 00:22:02.569 "params": { 00:22:02.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.569 "listen_address": { 00:22:02.569 "trtype": "TCP", 00:22:02.569 "adrfam": "IPv4", 00:22:02.569 "traddr": "10.0.0.2", 00:22:02.569 "trsvcid": "4420" 00:22:02.570 }, 00:22:02.570 "secure_channel": false, 00:22:02.570 "sock_impl": "ssl" 00:22:02.570 } 00:22:02.570 } 00:22:02.570 ] 00:22:02.570 } 00:22:02.570 ] 00:22:02.570 }' 00:22:02.570 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:02.831 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:02.831 "subsystems": [ 00:22:02.831 { 00:22:02.831 "subsystem": "keyring", 00:22:02.831 "config": [ 00:22:02.831 { 00:22:02.831 "method": "keyring_file_add_key", 00:22:02.831 "params": { 00:22:02.831 "name": "key0", 00:22:02.831 "path": "/tmp/tmp.ZWvKTCpXne" 00:22:02.831 } 00:22:02.831 } 00:22:02.831 ] 00:22:02.831 }, 00:22:02.831 { 00:22:02.831 "subsystem": "iobuf", 00:22:02.831 
"config": [ 00:22:02.831 { 00:22:02.831 "method": "iobuf_set_options", 00:22:02.831 "params": { 00:22:02.831 "small_pool_count": 8192, 00:22:02.831 "large_pool_count": 1024, 00:22:02.831 "small_bufsize": 8192, 00:22:02.831 "large_bufsize": 135168 00:22:02.831 } 00:22:02.831 } 00:22:02.831 ] 00:22:02.831 }, 00:22:02.831 { 00:22:02.831 "subsystem": "sock", 00:22:02.831 "config": [ 00:22:02.831 { 00:22:02.831 "method": "sock_set_default_impl", 00:22:02.831 "params": { 00:22:02.831 "impl_name": "posix" 00:22:02.831 } 00:22:02.831 }, 00:22:02.831 { 00:22:02.831 "method": "sock_impl_set_options", 00:22:02.831 "params": { 00:22:02.831 "impl_name": "ssl", 00:22:02.831 "recv_buf_size": 4096, 00:22:02.831 "send_buf_size": 4096, 00:22:02.831 "enable_recv_pipe": true, 00:22:02.831 "enable_quickack": false, 00:22:02.831 "enable_placement_id": 0, 00:22:02.831 "enable_zerocopy_send_server": true, 00:22:02.831 "enable_zerocopy_send_client": false, 00:22:02.831 "zerocopy_threshold": 0, 00:22:02.831 "tls_version": 0, 00:22:02.831 "enable_ktls": false 00:22:02.831 } 00:22:02.831 }, 00:22:02.831 { 00:22:02.831 "method": "sock_impl_set_options", 00:22:02.831 "params": { 00:22:02.831 "impl_name": "posix", 00:22:02.831 "recv_buf_size": 2097152, 00:22:02.831 "send_buf_size": 2097152, 00:22:02.831 "enable_recv_pipe": true, 00:22:02.831 "enable_quickack": false, 00:22:02.831 "enable_placement_id": 0, 00:22:02.831 "enable_zerocopy_send_server": true, 00:22:02.831 "enable_zerocopy_send_client": false, 00:22:02.831 "zerocopy_threshold": 0, 00:22:02.831 "tls_version": 0, 00:22:02.831 "enable_ktls": false 00:22:02.831 } 00:22:02.831 } 00:22:02.831 ] 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "subsystem": "vmd", 00:22:02.832 "config": [] 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "subsystem": "accel", 00:22:02.832 "config": [ 00:22:02.832 { 00:22:02.832 "method": "accel_set_options", 00:22:02.832 "params": { 00:22:02.832 "small_cache_size": 128, 00:22:02.832 "large_cache_size": 16, 00:22:02.832 "task_count": 2048, 00:22:02.832 "sequence_count": 2048, 00:22:02.832 "buf_count": 2048 00:22:02.832 } 00:22:02.832 } 00:22:02.832 ] 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "subsystem": "bdev", 00:22:02.832 "config": [ 00:22:02.832 { 00:22:02.832 "method": "bdev_set_options", 00:22:02.832 "params": { 00:22:02.832 "bdev_io_pool_size": 65535, 00:22:02.832 "bdev_io_cache_size": 256, 00:22:02.832 "bdev_auto_examine": true, 00:22:02.832 "iobuf_small_cache_size": 128, 00:22:02.832 "iobuf_large_cache_size": 16 00:22:02.832 } 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "method": "bdev_raid_set_options", 00:22:02.832 "params": { 00:22:02.832 "process_window_size_kb": 1024, 00:22:02.832 "process_max_bandwidth_mb_sec": 0 00:22:02.832 } 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "method": "bdev_iscsi_set_options", 00:22:02.832 "params": { 00:22:02.832 "timeout_sec": 30 00:22:02.832 } 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "method": "bdev_nvme_set_options", 00:22:02.832 "params": { 00:22:02.832 "action_on_timeout": "none", 00:22:02.832 "timeout_us": 0, 00:22:02.832 "timeout_admin_us": 0, 00:22:02.832 "keep_alive_timeout_ms": 10000, 00:22:02.832 "arbitration_burst": 0, 00:22:02.832 "low_priority_weight": 0, 00:22:02.832 "medium_priority_weight": 0, 00:22:02.832 "high_priority_weight": 0, 00:22:02.832 "nvme_adminq_poll_period_us": 10000, 00:22:02.832 "nvme_ioq_poll_period_us": 0, 00:22:02.832 "io_queue_requests": 512, 00:22:02.832 "delay_cmd_submit": true, 00:22:02.832 "transport_retry_count": 4, 00:22:02.832 "bdev_retry_count": 3, 
00:22:02.832 "transport_ack_timeout": 0, 00:22:02.832 "ctrlr_loss_timeout_sec": 0, 00:22:02.832 "reconnect_delay_sec": 0, 00:22:02.832 "fast_io_fail_timeout_sec": 0, 00:22:02.832 "disable_auto_failback": false, 00:22:02.832 "generate_uuids": false, 00:22:02.832 "transport_tos": 0, 00:22:02.832 "nvme_error_stat": false, 00:22:02.832 "rdma_srq_size": 0, 00:22:02.832 "io_path_stat": false, 00:22:02.832 "allow_accel_sequence": false, 00:22:02.832 "rdma_max_cq_size": 0, 00:22:02.832 "rdma_cm_event_timeout_ms": 0, 00:22:02.832 "dhchap_digests": [ 00:22:02.832 "sha256", 00:22:02.832 "sha384", 00:22:02.832 "sha512" 00:22:02.832 ], 00:22:02.832 "dhchap_dhgroups": [ 00:22:02.832 "null", 00:22:02.832 "ffdhe2048", 00:22:02.832 "ffdhe3072", 00:22:02.832 "ffdhe4096", 00:22:02.832 "ffdhe6144", 00:22:02.832 "ffdhe8192" 00:22:02.832 ] 00:22:02.832 } 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "method": "bdev_nvme_attach_controller", 00:22:02.832 "params": { 00:22:02.832 "name": "nvme0", 00:22:02.832 "trtype": "TCP", 00:22:02.832 "adrfam": "IPv4", 00:22:02.832 "traddr": "10.0.0.2", 00:22:02.832 "trsvcid": "4420", 00:22:02.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.832 "prchk_reftag": false, 00:22:02.832 "prchk_guard": false, 00:22:02.832 "ctrlr_loss_timeout_sec": 0, 00:22:02.832 "reconnect_delay_sec": 0, 00:22:02.832 "fast_io_fail_timeout_sec": 0, 00:22:02.832 "psk": "key0", 00:22:02.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.832 "hdgst": false, 00:22:02.832 "ddgst": false 00:22:02.832 } 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "method": "bdev_nvme_set_hotplug", 00:22:02.832 "params": { 00:22:02.832 "period_us": 100000, 00:22:02.832 "enable": false 00:22:02.832 } 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "method": "bdev_enable_histogram", 00:22:02.832 "params": { 00:22:02.832 "name": "nvme0n1", 00:22:02.832 "enable": true 00:22:02.832 } 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "method": "bdev_wait_for_examine" 00:22:02.832 } 00:22:02.832 ] 00:22:02.832 }, 00:22:02.832 { 00:22:02.832 "subsystem": "nbd", 00:22:02.832 "config": [] 00:22:02.832 } 00:22:02.832 ] 00:22:02.832 }' 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1472431 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1472431 ']' 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1472431 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472431 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472431' 00:22:02.832 killing process with pid 1472431 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1472431 00:22:02.832 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.832 00:22:02.832 Latency(us) 00:22:02.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.832 
=================================================================================================================== 00:22:02.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.832 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1472431 00:22:02.832 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1472234 00:22:02.832 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1472234 ']' 00:22:02.832 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1472234 00:22:02.832 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.832 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.832 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472234 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472234' 00:22:03.095 killing process with pid 1472234 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1472234 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1472234 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.095 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:03.095 "subsystems": [ 00:22:03.095 { 00:22:03.095 "subsystem": "keyring", 00:22:03.095 "config": [ 00:22:03.095 { 00:22:03.095 "method": "keyring_file_add_key", 00:22:03.095 "params": { 00:22:03.095 "name": "key0", 00:22:03.095 "path": "/tmp/tmp.ZWvKTCpXne" 00:22:03.095 } 00:22:03.095 } 00:22:03.095 ] 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "subsystem": "iobuf", 00:22:03.095 "config": [ 00:22:03.095 { 00:22:03.095 "method": "iobuf_set_options", 00:22:03.095 "params": { 00:22:03.095 "small_pool_count": 8192, 00:22:03.095 "large_pool_count": 1024, 00:22:03.095 "small_bufsize": 8192, 00:22:03.095 "large_bufsize": 135168 00:22:03.095 } 00:22:03.095 } 00:22:03.095 ] 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "subsystem": "sock", 00:22:03.095 "config": [ 00:22:03.095 { 00:22:03.095 "method": "sock_set_default_impl", 00:22:03.095 "params": { 00:22:03.095 "impl_name": "posix" 00:22:03.095 } 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "method": "sock_impl_set_options", 00:22:03.095 "params": { 00:22:03.095 "impl_name": "ssl", 00:22:03.095 "recv_buf_size": 4096, 00:22:03.095 "send_buf_size": 4096, 00:22:03.095 "enable_recv_pipe": true, 00:22:03.095 "enable_quickack": false, 00:22:03.095 "enable_placement_id": 0, 00:22:03.095 "enable_zerocopy_send_server": true, 00:22:03.095 "enable_zerocopy_send_client": false, 00:22:03.095 "zerocopy_threshold": 0, 00:22:03.095 "tls_version": 0, 00:22:03.095 
"enable_ktls": false 00:22:03.095 } 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "method": "sock_impl_set_options", 00:22:03.095 "params": { 00:22:03.095 "impl_name": "posix", 00:22:03.095 "recv_buf_size": 2097152, 00:22:03.095 "send_buf_size": 2097152, 00:22:03.095 "enable_recv_pipe": true, 00:22:03.095 "enable_quickack": false, 00:22:03.095 "enable_placement_id": 0, 00:22:03.095 "enable_zerocopy_send_server": true, 00:22:03.095 "enable_zerocopy_send_client": false, 00:22:03.095 "zerocopy_threshold": 0, 00:22:03.095 "tls_version": 0, 00:22:03.095 "enable_ktls": false 00:22:03.095 } 00:22:03.095 } 00:22:03.095 ] 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "subsystem": "vmd", 00:22:03.095 "config": [] 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "subsystem": "accel", 00:22:03.095 "config": [ 00:22:03.095 { 00:22:03.095 "method": "accel_set_options", 00:22:03.095 "params": { 00:22:03.095 "small_cache_size": 128, 00:22:03.095 "large_cache_size": 16, 00:22:03.095 "task_count": 2048, 00:22:03.095 "sequence_count": 2048, 00:22:03.095 "buf_count": 2048 00:22:03.095 } 00:22:03.095 } 00:22:03.095 ] 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "subsystem": "bdev", 00:22:03.095 "config": [ 00:22:03.095 { 00:22:03.095 "method": "bdev_set_options", 00:22:03.095 "params": { 00:22:03.095 "bdev_io_pool_size": 65535, 00:22:03.095 "bdev_io_cache_size": 256, 00:22:03.095 "bdev_auto_examine": true, 00:22:03.095 "iobuf_small_cache_size": 128, 00:22:03.095 "iobuf_large_cache_size": 16 00:22:03.095 } 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "method": "bdev_raid_set_options", 00:22:03.095 "params": { 00:22:03.095 "process_window_size_kb": 1024, 00:22:03.095 "process_max_bandwidth_mb_sec": 0 00:22:03.095 } 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "method": "bdev_iscsi_set_options", 00:22:03.095 "params": { 00:22:03.095 "timeout_sec": 30 00:22:03.095 } 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "method": "bdev_nvme_set_options", 00:22:03.095 "params": { 00:22:03.095 "action_on_timeout": "none", 00:22:03.095 "timeout_us": 0, 00:22:03.095 "timeout_admin_us": 0, 00:22:03.095 "keep_alive_timeout_ms": 10000, 00:22:03.095 "arbitration_burst": 0, 00:22:03.095 "low_priority_weight": 0, 00:22:03.095 "medium_priority_weight": 0, 00:22:03.095 "high_priority_weight": 0, 00:22:03.095 "nvme_adminq_poll_period_us": 10000, 00:22:03.095 "nvme_ioq_poll_period_us": 0, 00:22:03.095 "io_queue_requests": 0, 00:22:03.095 "delay_cmd_submit": true, 00:22:03.095 "transport_retry_count": 4, 00:22:03.095 "bdev_retry_count": 3, 00:22:03.095 "transport_ack_timeout": 0, 00:22:03.095 "ctrlr_loss_timeout_sec": 0, 00:22:03.095 "reconnect_delay_sec": 0, 00:22:03.095 "fast_io_fail_timeout_sec": 0, 00:22:03.095 "disable_auto_failback": false, 00:22:03.095 "generate_uuids": false, 00:22:03.095 "transport_tos": 0, 00:22:03.095 "nvme_error_stat": false, 00:22:03.095 "rdma_srq_size": 0, 00:22:03.095 "io_path_stat": false, 00:22:03.095 "allow_accel_sequence": false, 00:22:03.095 "rdma_max_cq_size": 0, 00:22:03.095 "rdma_cm_event_timeout_ms": 0, 00:22:03.095 "dhchap_digests": [ 00:22:03.096 "sha256", 00:22:03.096 "sha384", 00:22:03.096 "sha512" 00:22:03.096 ], 00:22:03.096 "dhchap_dhgroups": [ 00:22:03.096 "null", 00:22:03.096 "ffdhe2048", 00:22:03.096 "ffdhe3072", 00:22:03.096 "ffdhe4096", 00:22:03.096 "ffdhe6144", 00:22:03.096 "ffdhe8192" 00:22:03.096 ] 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "bdev_nvme_set_hotplug", 00:22:03.096 "params": { 00:22:03.096 "period_us": 100000, 00:22:03.096 "enable": false 00:22:03.096 } 
00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "bdev_malloc_create", 00:22:03.096 "params": { 00:22:03.096 "name": "malloc0", 00:22:03.096 "num_blocks": 8192, 00:22:03.096 "block_size": 4096, 00:22:03.096 "physical_block_size": 4096, 00:22:03.096 "uuid": "b77b5bc7-2763-4e76-97e2-0e4a7fe7fa4a", 00:22:03.096 "optimal_io_boundary": 0, 00:22:03.096 "md_size": 0, 00:22:03.096 "dif_type": 0, 00:22:03.096 "dif_is_head_of_md": false, 00:22:03.096 "dif_pi_format": 0 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "bdev_wait_for_examine" 00:22:03.096 } 00:22:03.096 ] 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "subsystem": "nbd", 00:22:03.096 "config": [] 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "subsystem": "scheduler", 00:22:03.096 "config": [ 00:22:03.096 { 00:22:03.096 "method": "framework_set_scheduler", 00:22:03.096 "params": { 00:22:03.096 "name": "static" 00:22:03.096 } 00:22:03.096 } 00:22:03.096 ] 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "subsystem": "nvmf", 00:22:03.096 "config": [ 00:22:03.096 { 00:22:03.096 "method": "nvmf_set_config", 00:22:03.096 "params": { 00:22:03.096 "discovery_filter": "match_any", 00:22:03.096 "admin_cmd_passthru": { 00:22:03.096 "identify_ctrlr": false 00:22:03.096 } 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "nvmf_set_max_subsystems", 00:22:03.096 "params": { 00:22:03.096 "max_subsystems": 1024 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "nvmf_set_crdt", 00:22:03.096 "params": { 00:22:03.096 "crdt1": 0, 00:22:03.096 "crdt2": 0, 00:22:03.096 "crdt3": 0 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "nvmf_create_transport", 00:22:03.096 "params": { 00:22:03.096 "trtype": "TCP", 00:22:03.096 "max_queue_depth": 128, 00:22:03.096 "max_io_qpairs_per_ctrlr": 127, 00:22:03.096 "in_capsule_data_size": 4096, 00:22:03.096 "max_io_size": 131072, 00:22:03.096 "io_unit_size": 131072, 00:22:03.096 "max_aq_depth": 128, 00:22:03.096 "num_shared_buffers": 511, 00:22:03.096 "buf_cache_size": 4294967295, 00:22:03.096 "dif_insert_or_strip": false, 00:22:03.096 "zcopy": false, 00:22:03.096 "c2h_success": false, 00:22:03.096 "sock_priority": 0, 00:22:03.096 "abort_timeout_sec": 1, 00:22:03.096 "ack_timeout": 0, 00:22:03.096 "data_wr_pool_size": 0 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "nvmf_create_subsystem", 00:22:03.096 "params": { 00:22:03.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.096 "allow_any_host": false, 00:22:03.096 "serial_number": "00000000000000000000", 00:22:03.096 "model_number": "SPDK bdev Controller", 00:22:03.096 "max_namespaces": 32, 00:22:03.096 "min_cntlid": 1, 00:22:03.096 "max_cntlid": 65519, 00:22:03.096 "ana_reporting": false 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "nvmf_subsystem_add_host", 00:22:03.096 "params": { 00:22:03.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.096 "host": "nqn.2016-06.io.spdk:host1", 00:22:03.096 "psk": "key0" 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "nvmf_subsystem_add_ns", 00:22:03.096 "params": { 00:22:03.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.096 "namespace": { 00:22:03.096 "nsid": 1, 00:22:03.096 "bdev_name": "malloc0", 00:22:03.096 "nguid": "B77B5BC727634E7697E20E4A7FE7FA4A", 00:22:03.096 "uuid": "b77b5bc7-2763-4e76-97e2-0e4a7fe7fa4a", 00:22:03.096 "no_auto_visible": false 00:22:03.096 } 00:22:03.096 } 00:22:03.096 }, 00:22:03.096 { 00:22:03.096 "method": "nvmf_subsystem_add_listener", 00:22:03.096 "params": { 
00:22:03.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.096 "listen_address": { 00:22:03.096 "trtype": "TCP", 00:22:03.096 "adrfam": "IPv4", 00:22:03.096 "traddr": "10.0.0.2", 00:22:03.096 "trsvcid": "4420" 00:22:03.096 }, 00:22:03.096 "secure_channel": false, 00:22:03.096 "sock_impl": "ssl" 00:22:03.096 } 00:22:03.096 } 00:22:03.096 ] 00:22:03.096 } 00:22:03.096 ] 00:22:03.096 }' 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1473117 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1473117 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1473117 ']' 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.096 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.096 [2024-07-25 17:01:23.329700] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:22:03.096 [2024-07-25 17:01:23.329757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.096 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.357 [2024-07-25 17:01:23.394585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.357 [2024-07-25 17:01:23.459882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.357 [2024-07-25 17:01:23.459919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.357 [2024-07-25 17:01:23.459927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.357 [2024-07-25 17:01:23.459933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.357 [2024-07-25 17:01:23.459939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:03.357 [2024-07-25 17:01:23.459987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.618 [2024-07-25 17:01:23.657189] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.618 [2024-07-25 17:01:23.700236] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.618 [2024-07-25 17:01:23.700440] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1473156 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1473156 /var/tmp/bdevperf.sock 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1473156 ']' 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
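Note on the configuration echoed above: it is what the target side runs with. The TLS PSK is registered through the keyring subsystem (keyring_file_add_key, name key0, path /tmp/tmp.ZWvKTCpXne), a malloc bdev backs namespace 1 of nqn.2016-06.io.spdk:cnode1, host nqn.2016-06.io.spdk:host1 is admitted with psk key0, and the listener on 10.0.0.2:4420 is created with sock_impl ssl and secure_channel false. As a rough stand-alone sketch (not the test script itself), the same shape can be reproduced by feeding a trimmed JSON config to nvmf_tgt; SPDK_DIR and /tmp/psk.txt below are placeholders, and any subsystem left out of the JSON simply keeps its defaults:

# Sketch only: trimmed TLS target config mirroring the methods in the saved config above.
# SPDK_DIR and the PSK file path are assumptions, not values from this log.
SPDK_DIR=/path/to/spdk
cat > /tmp/tls_tgt.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring",
      "config": [ { "method": "keyring_file_add_key",
                    "params": { "name": "key0", "path": "/tmp/psk.txt" } } ] },
    { "subsystem": "bdev",
      "config": [ { "method": "bdev_malloc_create",
                    "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } },
                  { "method": "bdev_wait_for_examine" } ] },
    { "subsystem": "nvmf",
      "config": [ { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
                  { "method": "nvmf_create_subsystem",
                    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
                  { "method": "nvmf_subsystem_add_host",
                    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                                "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
                  { "method": "nvmf_subsystem_add_ns",
                    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                                "namespace": { "nsid": 1, "bdev_name": "malloc0" } } },
                  { "method": "nvmf_subsystem_add_listener",
                    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                                "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                                    "traddr": "10.0.0.2", "trsvcid": "4420" },
                                "secure_channel": false, "sock_impl": "ssl" } } ] }
  ]
}
EOF
"$SPDK_DIR"/build/bin/nvmf_tgt -c /tmp/tls_tgt.json

The test does effectively the same thing, except the JSON is generated inline and handed to nvmf_tgt on /dev/fd/62, and the PSK file is the temporary /tmp/tmp.ZWvKTCpXne removed during cleanup further down.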
00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.880 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:03.880 "subsystems": [ 00:22:03.880 { 00:22:03.880 "subsystem": "keyring", 00:22:03.880 "config": [ 00:22:03.880 { 00:22:03.880 "method": "keyring_file_add_key", 00:22:03.880 "params": { 00:22:03.880 "name": "key0", 00:22:03.880 "path": "/tmp/tmp.ZWvKTCpXne" 00:22:03.880 } 00:22:03.880 } 00:22:03.880 ] 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "subsystem": "iobuf", 00:22:03.880 "config": [ 00:22:03.880 { 00:22:03.880 "method": "iobuf_set_options", 00:22:03.880 "params": { 00:22:03.880 "small_pool_count": 8192, 00:22:03.880 "large_pool_count": 1024, 00:22:03.880 "small_bufsize": 8192, 00:22:03.880 "large_bufsize": 135168 00:22:03.880 } 00:22:03.880 } 00:22:03.880 ] 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "subsystem": "sock", 00:22:03.880 "config": [ 00:22:03.880 { 00:22:03.880 "method": "sock_set_default_impl", 00:22:03.880 "params": { 00:22:03.880 "impl_name": "posix" 00:22:03.880 } 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "method": "sock_impl_set_options", 00:22:03.880 "params": { 00:22:03.880 "impl_name": "ssl", 00:22:03.880 "recv_buf_size": 4096, 00:22:03.880 "send_buf_size": 4096, 00:22:03.880 "enable_recv_pipe": true, 00:22:03.880 "enable_quickack": false, 00:22:03.880 "enable_placement_id": 0, 00:22:03.880 "enable_zerocopy_send_server": true, 00:22:03.880 "enable_zerocopy_send_client": false, 00:22:03.880 "zerocopy_threshold": 0, 00:22:03.880 "tls_version": 0, 00:22:03.880 "enable_ktls": false 00:22:03.880 } 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "method": "sock_impl_set_options", 00:22:03.880 "params": { 00:22:03.880 "impl_name": "posix", 00:22:03.880 "recv_buf_size": 2097152, 00:22:03.880 "send_buf_size": 2097152, 00:22:03.880 "enable_recv_pipe": true, 00:22:03.880 "enable_quickack": false, 00:22:03.880 "enable_placement_id": 0, 00:22:03.880 "enable_zerocopy_send_server": true, 00:22:03.880 "enable_zerocopy_send_client": false, 00:22:03.880 "zerocopy_threshold": 0, 00:22:03.880 "tls_version": 0, 00:22:03.880 "enable_ktls": false 00:22:03.880 } 00:22:03.880 } 00:22:03.880 ] 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "subsystem": "vmd", 00:22:03.880 "config": [] 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "subsystem": "accel", 00:22:03.880 "config": [ 00:22:03.880 { 00:22:03.880 "method": "accel_set_options", 00:22:03.880 "params": { 00:22:03.880 "small_cache_size": 128, 00:22:03.880 "large_cache_size": 16, 00:22:03.880 "task_count": 2048, 00:22:03.880 "sequence_count": 2048, 00:22:03.880 "buf_count": 2048 00:22:03.880 } 00:22:03.880 } 00:22:03.880 ] 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "subsystem": "bdev", 00:22:03.880 "config": [ 00:22:03.880 { 00:22:03.880 "method": "bdev_set_options", 00:22:03.880 "params": { 00:22:03.880 "bdev_io_pool_size": 65535, 00:22:03.880 "bdev_io_cache_size": 256, 00:22:03.880 "bdev_auto_examine": true, 00:22:03.880 "iobuf_small_cache_size": 128, 00:22:03.880 "iobuf_large_cache_size": 16 00:22:03.880 } 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "method": "bdev_raid_set_options", 00:22:03.880 
"params": { 00:22:03.880 "process_window_size_kb": 1024, 00:22:03.880 "process_max_bandwidth_mb_sec": 0 00:22:03.880 } 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "method": "bdev_iscsi_set_options", 00:22:03.880 "params": { 00:22:03.880 "timeout_sec": 30 00:22:03.880 } 00:22:03.880 }, 00:22:03.880 { 00:22:03.880 "method": "bdev_nvme_set_options", 00:22:03.880 "params": { 00:22:03.880 "action_on_timeout": "none", 00:22:03.880 "timeout_us": 0, 00:22:03.880 "timeout_admin_us": 0, 00:22:03.880 "keep_alive_timeout_ms": 10000, 00:22:03.880 "arbitration_burst": 0, 00:22:03.880 "low_priority_weight": 0, 00:22:03.880 "medium_priority_weight": 0, 00:22:03.880 "high_priority_weight": 0, 00:22:03.880 "nvme_adminq_poll_period_us": 10000, 00:22:03.880 "nvme_ioq_poll_period_us": 0, 00:22:03.880 "io_queue_requests": 512, 00:22:03.881 "delay_cmd_submit": true, 00:22:03.881 "transport_retry_count": 4, 00:22:03.881 "bdev_retry_count": 3, 00:22:03.881 "transport_ack_timeout": 0, 00:22:03.881 "ctrlr_loss_timeout_sec": 0, 00:22:03.881 "reconnect_delay_sec": 0, 00:22:03.881 "fast_io_fail_timeout_sec": 0, 00:22:03.881 "disable_auto_failback": false, 00:22:03.881 "generate_uuids": false, 00:22:03.881 "transport_tos": 0, 00:22:03.881 "nvme_error_stat": false, 00:22:03.881 "rdma_srq_size": 0, 00:22:03.881 "io_path_stat": false, 00:22:03.881 "allow_accel_sequence": false, 00:22:03.881 "rdma_max_cq_size": 0, 00:22:03.881 "rdma_cm_event_timeout_ms": 0, 00:22:03.881 "dhchap_digests": [ 00:22:03.881 "sha256", 00:22:03.881 "sha384", 00:22:03.881 "sha512" 00:22:03.881 ], 00:22:03.881 "dhchap_dhgroups": [ 00:22:03.881 "null", 00:22:03.881 "ffdhe2048", 00:22:03.881 "ffdhe3072", 00:22:03.881 "ffdhe4096", 00:22:03.881 "ffdhe6144", 00:22:03.881 "ffdhe8192" 00:22:03.881 ] 00:22:03.881 } 00:22:03.881 }, 00:22:03.881 { 00:22:03.881 "method": "bdev_nvme_attach_controller", 00:22:03.881 "params": { 00:22:03.881 "name": "nvme0", 00:22:03.881 "trtype": "TCP", 00:22:03.881 "adrfam": "IPv4", 00:22:03.881 "traddr": "10.0.0.2", 00:22:03.881 "trsvcid": "4420", 00:22:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.881 "prchk_reftag": false, 00:22:03.881 "prchk_guard": false, 00:22:03.881 "ctrlr_loss_timeout_sec": 0, 00:22:03.881 "reconnect_delay_sec": 0, 00:22:03.881 "fast_io_fail_timeout_sec": 0, 00:22:03.881 "psk": "key0", 00:22:03.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.881 "hdgst": false, 00:22:03.881 "ddgst": false 00:22:03.881 } 00:22:03.881 }, 00:22:03.881 { 00:22:03.881 "method": "bdev_nvme_set_hotplug", 00:22:03.881 "params": { 00:22:03.881 "period_us": 100000, 00:22:03.881 "enable": false 00:22:03.881 } 00:22:03.881 }, 00:22:03.881 { 00:22:03.881 "method": "bdev_enable_histogram", 00:22:03.881 "params": { 00:22:03.881 "name": "nvme0n1", 00:22:03.881 "enable": true 00:22:03.881 } 00:22:03.881 }, 00:22:03.881 { 00:22:03.881 "method": "bdev_wait_for_examine" 00:22:03.881 } 00:22:03.881 ] 00:22:03.881 }, 00:22:03.881 { 00:22:03.881 "subsystem": "nbd", 00:22:03.881 "config": [] 00:22:03.881 } 00:22:03.881 ] 00:22:03.881 }' 00:22:04.142 [2024-07-25 17:01:24.172188] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:22:04.142 [2024-07-25 17:01:24.172245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473156 ] 00:22:04.142 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.142 [2024-07-25 17:01:24.226536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.142 [2024-07-25 17:01:24.281074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.142 [2024-07-25 17:01:24.414504] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.086 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.086 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:05.086 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.086 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:05.086 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.086 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.086 Running I/O for 1 seconds... 00:22:06.473 00:22:06.473 Latency(us) 00:22:06.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.473 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:06.473 Verification LBA range: start 0x0 length 0x2000 00:22:06.473 nvme0n1 : 1.08 1494.70 5.84 0.00 0.00 82984.30 6089.39 131072.00 00:22:06.473 =================================================================================================================== 00:22:06.473 Total : 1494.70 5.84 0.00 0.00 82984.30 6089.39 131072.00 00:22:06.473 0 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:06.473 nvmf_trace.0 00:22:06.473 17:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1473156 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1473156 ']' 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1473156 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473156 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473156' 00:22:06.473 killing process with pid 1473156 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1473156 00:22:06.473 Received shutdown signal, test time was about 1.000000 seconds 00:22:06.473 00:22:06.473 Latency(us) 00:22:06.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.473 =================================================================================================================== 00:22:06.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1473156 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.473 rmmod nvme_tcp 00:22:06.473 rmmod nvme_fabrics 00:22:06.473 rmmod nvme_keyring 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1473117 ']' 00:22:06.473 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1473117 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1473117 ']' 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1473117 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.474 17:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473117 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473117' 00:22:06.474 killing process with pid 1473117 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1473117 00:22:06.474 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1473117 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.736 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.654 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.654 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.O7JSHuf82m /tmp/tmp.Onuwh9iMUm /tmp/tmp.ZWvKTCpXne 00:22:08.916 00:22:08.916 real 1m23.231s 00:22:08.916 user 2m5.025s 00:22:08.916 sys 0m29.617s 00:22:08.916 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:08.916 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.916 ************************************ 00:22:08.916 END TEST nvmf_tls 00:22:08.916 ************************************ 00:22:08.916 17:01:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:08.916 17:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:08.916 17:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:08.916 17:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:08.916 ************************************ 00:22:08.916 START TEST nvmf_fips 00:22:08.916 ************************************ 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:08.916 * Looking for test storage... 
00:22:08.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.916 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:08.917 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:09.179 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:09.180 Error setting digest 00:22:09.180 00C25392457F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:09.180 00C25392457F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.180 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:17.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
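Before the NIC discovery output continues below, it is worth pinning down what the FIPS gate a few records back actually verified: openssl list -providers must report both a base and a fips provider, and a non-approved digest such as MD5 must be refused (the "Error setting digest" lines are the expected outcome, not a failure). A minimal standalone sketch of that gate, assuming an OpenSSL 3.x build and reusing the spdk_fips.conf name exported in the trace; this is an illustration, not the test's own build_openssl_config helper:

    # Expect a FIPS-only OpenSSL configuration: the fips provider must be
    # active and legacy digests must be rejected.
    export OPENSSL_CONF=spdk_fips.conf
    openssl list -providers | grep name              # should include a fips provider entry
    if echo test | openssl md5 /dev/stdin 2>/dev/null; then
        echo "MD5 unexpectedly succeeded - FIPS mode is not in effect" >&2
        exit 1
    fi
    echo "FIPS mode confirmed: MD5 was rejected"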
00:22:17.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.332 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:17.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:17.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.333 
17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:22:17.333 00:22:17.333 --- 10.0.0.2 ping statistics --- 00:22:17.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.333 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.503 ms 00:22:17.333 00:22:17.333 --- 10.0.0.1 ping statistics --- 00:22:17.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.333 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1477846 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1477846 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1477846 ']' 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.333 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:17.333 [2024-07-25 17:01:36.477315] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
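The target that is starting up here runs inside the namespace created just above, and the interface plumbing reduces to one small, reproducible pattern: one physical port is moved into a private network namespace to host the NVMe/TCP target at 10.0.0.2, its link partner stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and connectivity is proven by ping in both directions. A condensed sketch of those steps, using the cvl_0_0/cvl_0_1 names from this run and assuming the two ports are physically connected to each other:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator

Launching the target with ip netns exec "$NS", as the nvmf_tgt invocation above does, is what lets a single host exercise real NIC hardware on both ends of the connection.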
00:22:17.333 [2024-07-25 17:01:36.477372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.333 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.333 [2024-07-25 17:01:36.554378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.333 [2024-07-25 17:01:36.643371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.333 [2024-07-25 17:01:36.643419] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.333 [2024-07-25 17:01:36.643428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.333 [2024-07-25 17:01:36.643435] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.333 [2024-07-25 17:01:36.643441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.333 [2024-07-25 17:01:36.643464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:17.333 [2024-07-25 17:01:37.464922] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.333 [2024-07-25 17:01:37.480925] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.333 [2024-07-25 17:01:37.481216] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.333 
[2024-07-25 17:01:37.511021] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:17.333 malloc0 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1478196 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1478196 /var/tmp/bdevperf.sock 00:22:17.333 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:17.334 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1478196 ']' 00:22:17.334 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.334 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.334 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.334 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.334 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:17.595 [2024-07-25 17:01:37.615893] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:22:17.595 [2024-07-25 17:01:37.615967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478196 ] 00:22:17.595 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.595 [2024-07-25 17:01:37.671707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.595 [2024-07-25 17:01:37.736199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.166 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:18.166 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:18.166 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:18.428 [2024-07-25 17:01:38.491722] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:18.428 [2024-07-25 17:01:38.491786] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:18.428 TLSTESTn1 00:22:18.428 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:18.689 Running I/O for 10 seconds... 
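The TLS leg that is now running I/O uses a pre-shared key in the NVMe/TCP interchange format (the NVMeTLSkey-1:01:... string written to key.txt with mode 0600 above). The target-side subsystem and listener configuration happens inside setup_nvmf_tgt_conf via rpc.py and is not expanded in this trace, so the sketch below covers only the initiator half, reusing the exact bdevperf flags, paths and NQNs from this run; PSK is assumed to hold an interchange-format key string, and the sleep stands in for the harness's waitforlisten:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=$SPDK_DIR/test/nvmf/fips/key.txt
    printf '%s' "$PSK" > "$KEY" && chmod 0600 "$KEY"   # keys must not be world-readable

    # Start bdevperf idle (-z) on its own RPC socket, hot-attach the TLS-protected
    # controller, then kick off the 10-second verify workload.
    "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    sleep 2   # stand-in for waiting on the bdevperf RPC socket
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests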
00:22:28.700
00:22:28.700 Latency(us)
00:22:28.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:28.700 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:28.700 Verification LBA range: start 0x0 length 0x2000
00:22:28.700 TLSTESTn1 : 10.07 1966.80 7.68 0.00 0.00 64855.31 6116.69 152917.33
00:22:28.700 ===================================================================================================================
00:22:28.700 Total : 1966.80 7.68 0.00 0.00 64855.31 6116.69 152917.33
00:22:28.700 0
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:28.700 nvmf_trace.0
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1478196
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1478196 ']'
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1478196
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1478196
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1478196'
00:22:28.700 killing process with pid 1478196
00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1478196
00:22:28.700 Received shutdown signal, test time was about 10.000000 seconds
00:22:28.700
00:22:28.700 Latency(us)
00:22:28.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:28.700 ===================================================================================================================
00:22:28.700 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:28.700 
[2024-07-25 17:01:48.958332] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:28.700 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1478196 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.962 rmmod nvme_tcp 00:22:28.962 rmmod nvme_fabrics 00:22:28.962 rmmod nvme_keyring 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1477846 ']' 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1477846 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1477846 ']' 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1477846 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477846 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477846' 00:22:28.962 killing process with pid 1477846 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1477846 00:22:28.962 [2024-07-25 17:01:49.200664] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:28.962 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1477846 00:22:29.226 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:29.226 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:29.226 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:29.226 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:29.226 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:29.226 17:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.226 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.226 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:31.184 00:22:31.184 real 0m22.391s 00:22:31.184 user 0m22.663s 00:22:31.184 sys 0m10.345s 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:31.184 ************************************ 00:22:31.184 END TEST nvmf_fips 00:22:31.184 ************************************ 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.184 17:01:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.334 
17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:39.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:39.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.334 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:39.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:39.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.335 ************************************ 00:22:39.335 START TEST nvmf_perf_adq 00:22:39.335 ************************************ 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:39.335 * Looking for test storage... 
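At this point the runner has counted two usable TCP ports and hands control to the next sub-test through run_test, which prints the START/END banners and the real/user/sys timing summary seen at the close of nvmf_fips a little earlier. Stripped of that bookkeeping, reproducing just this sub-test outside the harness amounts to timing the wrapped script itself (a sketch, not run_test's actual implementation):

    # Roughly the command run_test wraps for this case, minus the banner plumbing.
    script=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh
    time "$script" --transport=tcp
    echo "nvmf_perf_adq exit status: $?"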
00:22:39.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.335 17:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.335 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.930 17:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:45.930 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:45.930 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:45.930 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
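The discovery pass running here identifies usable ports purely by PCI vendor:device ID (0x8086:0x159b, an E810-family controller driven by ice) and then resolves each function to its kernel net device through the /sys/bus/pci/devices/<bdf>/net/ glob shown on the previous record. The same mapping can be reproduced by hand; a small sketch, assuming lspci from pciutils is installed (the harness itself relies on its own pci_bus_cache rather than lspci):

    # Map every 8086:159b function to its net device and link state, the same
    # way the pci_net_devs glob does it.
    for bdf in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
            [ -e "$dev" ] && echo "$bdf -> ${dev##*/} ($(cat "$dev/operstate"))"
        done
    done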
00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:45.930 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:45.930 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:46.503 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:48.434 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
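A few records back, adq_reload_driver bounced the ice driver (rmmod ice, modprobe ice, then a flat sleep 5) before the ADQ-enabled target is brought up, which is why the whole PCI and netdev discovery repeats here. A generic version of that reload which polls for the renamed ports instead of sleeping a fixed interval; the polling loop is an assumption for illustration, not what perf_adq.sh itself does:

    rmmod ice
    modprobe ice
    # Give udev time to rename the ports back to cvl_0_0/cvl_0_1, polling up to
    # ~10 seconds rather than relying on a fixed sleep.
    for _ in $(seq 1 10); do
        [ -e /sys/class/net/cvl_0_0 ] && [ -e /sys/class/net/cvl_0_1 ] && break
        sleep 1
    done
    ip -br link show | grep cvl_0_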
00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:53.729 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:53.729 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:53.729 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.729 17:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.729 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:53.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
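nvmf_tcp_init, traced above, splits the two ports so that the target listens from inside a private network namespace while the initiator stays in the root namespace and traffic crosses the physical link. A condensed sketch of that topology using the names from this log (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk); on another system only the interface names would change:

#!/usr/bin/env bash
# Build the two-port test topology: cvl_0_0 (target side) moves into a
# private namespace and gets 10.0.0.2/24, cvl_0_1 (initiator side) stays
# in the root namespace with 10.0.0.1/24.
set -euo pipefail

TGT_IF=cvl_0_0            # target-side port
INI_IF=cvl_0_1            # initiator-side port
NS=cvl_0_0_ns_spdk        # namespace holding the target port

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1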
00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:53.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:22:53.730 00:22:53.730 --- 10.0.0.2 ping statistics --- 00:22:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.730 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:22:53.730 00:22:53.730 --- 10.0.0.1 ping statistics --- 00:22:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.730 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1489891 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1489891 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1489891 ']' 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
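The target itself is started inside that namespace with --wait-for-rpc and then configured purely over RPC; the rpc_cmd calls traced in the next few entries amount to roughly the sequence below. This sketch drives scripts/rpc.py directly against the default /var/tmp/spdk.sock instead of the rpc_cmd coprocess the harness uses, with the flags copied from the baseline (non-ADQ) run:

#!/usr/bin/env bash
# Start nvmf_tgt in the target namespace, then issue the same RPC sequence
# adq_configure_nvmf_target uses (placement-id 0 / sock-priority 0 here).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
RPC="$SPDK/scripts/rpc.py"

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

# Wait until the RPC socket answers before configuring anything.
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

"$RPC" sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
"$RPC" framework_start_init
"$RPC" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
"$RPC" bdev_malloc_create 64 512 -b Malloc1
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420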
00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.730 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:53.730 [2024-07-25 17:02:13.865654] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:22:53.730 [2024-07-25 17:02:13.865725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.730 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.730 [2024-07-25 17:02:13.937707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.991 [2024-07-25 17:02:14.015310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.991 [2024-07-25 17:02:14.015351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.991 [2024-07-25 17:02:14.015359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.991 [2024-07-25 17:02:14.015366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.992 [2024-07-25 17:02:14.015371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.992 [2024-07-25 17:02:14.015506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.992 [2024-07-25 17:02:14.015626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.992 [2024-07-25 17:02:14.015782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.992 [2024-07-25 17:02:14.015784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.564 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.565 
17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.565 [2024-07-25 17:02:14.818551] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.565 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.826 Malloc1 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.826 17:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.826 [2024-07-25 17:02:14.861818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1490131 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:54.826 17:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:54.826 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:56.744 "tick_rate": 2400000000, 00:22:56.744 "poll_groups": [ 00:22:56.744 { 00:22:56.744 "name": "nvmf_tgt_poll_group_000", 00:22:56.744 "admin_qpairs": 1, 00:22:56.744 "io_qpairs": 1, 00:22:56.744 "current_admin_qpairs": 1, 00:22:56.744 "current_io_qpairs": 1, 00:22:56.744 "pending_bdev_io": 0, 00:22:56.744 "completed_nvme_io": 18550, 00:22:56.744 "transports": [ 00:22:56.744 { 00:22:56.744 "trtype": "TCP" 00:22:56.744 } 00:22:56.744 ] 00:22:56.744 }, 00:22:56.744 { 00:22:56.744 "name": "nvmf_tgt_poll_group_001", 00:22:56.744 "admin_qpairs": 0, 00:22:56.744 "io_qpairs": 1, 00:22:56.744 "current_admin_qpairs": 0, 00:22:56.744 "current_io_qpairs": 1, 00:22:56.744 "pending_bdev_io": 0, 00:22:56.744 "completed_nvme_io": 28911, 00:22:56.744 "transports": [ 00:22:56.744 { 00:22:56.744 "trtype": "TCP" 00:22:56.744 } 00:22:56.744 ] 00:22:56.744 }, 00:22:56.744 { 00:22:56.744 "name": "nvmf_tgt_poll_group_002", 00:22:56.744 "admin_qpairs": 0, 00:22:56.744 "io_qpairs": 1, 00:22:56.744 "current_admin_qpairs": 0, 00:22:56.744 "current_io_qpairs": 1, 00:22:56.744 "pending_bdev_io": 0, 00:22:56.744 "completed_nvme_io": 20645, 00:22:56.744 "transports": [ 00:22:56.744 { 00:22:56.744 "trtype": "TCP" 00:22:56.744 } 00:22:56.744 ] 00:22:56.744 }, 00:22:56.744 { 00:22:56.744 "name": "nvmf_tgt_poll_group_003", 00:22:56.744 "admin_qpairs": 0, 00:22:56.744 "io_qpairs": 1, 00:22:56.744 "current_admin_qpairs": 0, 00:22:56.744 "current_io_qpairs": 1, 00:22:56.744 "pending_bdev_io": 0, 00:22:56.744 "completed_nvme_io": 20431, 00:22:56.744 "transports": [ 00:22:56.744 { 00:22:56.744 "trtype": "TCP" 00:22:56.744 } 00:22:56.744 ] 00:22:56.744 } 00:22:56.744 ] 00:22:56.744 }' 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:56.744 17:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:56.744 17:02:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1490131 00:23:04.888 Initializing NVMe Controllers 00:23:04.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:04.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:04.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:04.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:04.888 Initialization complete. Launching workers. 00:23:04.888 ======================================================== 00:23:04.888 Latency(us) 00:23:04.888 Device Information : IOPS MiB/s Average min max 00:23:04.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11481.30 44.85 5574.79 1384.35 8599.55 00:23:04.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14663.20 57.28 4375.80 1435.77 45395.66 00:23:04.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13975.70 54.59 4578.89 1587.81 11872.17 00:23:04.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12396.10 48.42 5163.12 1883.90 46162.95 00:23:04.888 ======================================================== 00:23:04.888 Total : 52516.29 205.14 4877.82 1384.35 46162.95 00:23:04.888 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.888 rmmod nvme_tcp 00:23:04.888 rmmod nvme_fabrics 00:23:04.888 rmmod nvme_keyring 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1489891 ']' 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1489891 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1489891 ']' 00:23:04.888 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1489891 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1489891 
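The pass condition checked above is just a count over nvmf_get_stats: with ADQ off and perf pinned to four cores (-c 0xF0), each of the four poll groups must own exactly one I/O qpair. A sketch of that check, assuming the same RPC socket as the run above:

#!/usr/bin/env bash
# Verify the perf connections were spread across all four poll groups,
# the same check perf_adq.sh does at lines 77-79 for the non-ADQ baseline.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

count=$("$RPC" nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)

if [[ $count -ne 4 ]]; then
    echo "ERROR: expected one io_qpair on each of 4 poll groups, got $count" >&2
    exit 1
fi
echo "OK: $count poll groups each own exactly one io_qpair"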
00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1489891' 00:23:05.149 killing process with pid 1489891 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1489891 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1489891 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.149 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.723 17:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.723 17:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:07.723 17:02:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:09.109 17:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:11.023 17:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:16.001 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:16.001 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.001 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.002 17:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:16.002 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:16.002 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:16.002 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:16.002 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.002 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.002 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.002 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.002 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.002 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.002 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.002 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.002 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:23:16.002 00:23:16.002 --- 10.0.0.2 ping statistics --- 00:23:16.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.003 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:23:16.003 00:23:16.003 --- 10.0.0.1 ping statistics --- 00:23:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.003 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.003 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:16.263 net.core.busy_poll = 1 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:16.263 net.core.busy_read = 1 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:16.263 17:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:16.263 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1494827 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1494827 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1494827 ']' 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.524 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.524 [2024-07-25 17:02:36.615940] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:23:16.524 [2024-07-25 17:02:36.616001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.524 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.524 [2024-07-25 17:02:36.685987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.524 [2024-07-25 17:02:36.758611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.527 [2024-07-25 17:02:36.758651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.527 [2024-07-25 17:02:36.758659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.527 [2024-07-25 17:02:36.758665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
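For the ADQ run, the driver-side setup traced above (adq_configure_driver) dedicates a hardware traffic class on the ice port to the NVMe/TCP flow before the target is restarted. The same commands collected in one place; the interface, namespace, and 2@0/2@2 queue split are taken from this log:

#!/usr/bin/env bash
# ADQ host configuration for the target port: enable hw TC offload and
# busy polling, carve out traffic class 1 with its own queue pair set, and
# steer TCP traffic to 10.0.0.2:4420 into that class in hardware.
set -euo pipefail

NS=cvl_0_0_ns_spdk
IF=cvl_0_0

ip netns exec "$NS" ethtool --offload "$IF" hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# TC0 = default traffic (queues 0-1), TC1 = NVMe/TCP (queues 2-3).
ip netns exec "$NS" tc qdisc add dev "$IF" root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec "$NS" tc qdisc add dev "$IF" ingress
ip netns exec "$NS" tc filter add dev "$IF" protocol ip parent ffff: prio 1 \
    flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Align XPS/queue mappings for the carved-out queues (SPDK helper script).
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs "$IF"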
00:23:16.527 [2024-07-25 17:02:36.758671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.527 [2024-07-25 17:02:36.758811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.527 [2024-07-25 17:02:36.758924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.527 [2024-07-25 17:02:36.759080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.527 [2024-07-25 17:02:36.759081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:17.471 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 [2024-07-25 17:02:37.570543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 Malloc1 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.472 [2024-07-25 17:02:37.629973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1494958 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:17.472 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:17.472 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.389 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:19.389 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.389 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.389 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.389 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:19.389 "tick_rate": 2400000000, 00:23:19.389 "poll_groups": [ 00:23:19.389 { 00:23:19.389 "name": 
"nvmf_tgt_poll_group_000", 00:23:19.389 "admin_qpairs": 1, 00:23:19.389 "io_qpairs": 1, 00:23:19.389 "current_admin_qpairs": 1, 00:23:19.389 "current_io_qpairs": 1, 00:23:19.389 "pending_bdev_io": 0, 00:23:19.389 "completed_nvme_io": 25763, 00:23:19.389 "transports": [ 00:23:19.389 { 00:23:19.389 "trtype": "TCP" 00:23:19.389 } 00:23:19.389 ] 00:23:19.389 }, 00:23:19.389 { 00:23:19.389 "name": "nvmf_tgt_poll_group_001", 00:23:19.389 "admin_qpairs": 0, 00:23:19.389 "io_qpairs": 3, 00:23:19.389 "current_admin_qpairs": 0, 00:23:19.389 "current_io_qpairs": 3, 00:23:19.389 "pending_bdev_io": 0, 00:23:19.389 "completed_nvme_io": 42999, 00:23:19.389 "transports": [ 00:23:19.389 { 00:23:19.389 "trtype": "TCP" 00:23:19.389 } 00:23:19.389 ] 00:23:19.389 }, 00:23:19.389 { 00:23:19.389 "name": "nvmf_tgt_poll_group_002", 00:23:19.389 "admin_qpairs": 0, 00:23:19.389 "io_qpairs": 0, 00:23:19.389 "current_admin_qpairs": 0, 00:23:19.389 "current_io_qpairs": 0, 00:23:19.389 "pending_bdev_io": 0, 00:23:19.389 "completed_nvme_io": 0, 00:23:19.389 "transports": [ 00:23:19.389 { 00:23:19.389 "trtype": "TCP" 00:23:19.389 } 00:23:19.389 ] 00:23:19.389 }, 00:23:19.389 { 00:23:19.389 "name": "nvmf_tgt_poll_group_003", 00:23:19.389 "admin_qpairs": 0, 00:23:19.389 "io_qpairs": 0, 00:23:19.389 "current_admin_qpairs": 0, 00:23:19.389 "current_io_qpairs": 0, 00:23:19.389 "pending_bdev_io": 0, 00:23:19.389 "completed_nvme_io": 0, 00:23:19.389 "transports": [ 00:23:19.389 { 00:23:19.389 "trtype": "TCP" 00:23:19.389 } 00:23:19.389 ] 00:23:19.389 } 00:23:19.390 ] 00:23:19.390 }' 00:23:19.651 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:19.651 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:19.651 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:19.651 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:19.651 17:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1494958 00:23:27.798 Initializing NVMe Controllers 00:23:27.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:27.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:27.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:27.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:27.798 Initialization complete. Launching workers. 
00:23:27.798 ======================================================== 00:23:27.798 Latency(us) 00:23:27.798 Device Information : IOPS MiB/s Average min max 00:23:27.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7026.60 27.45 9113.98 1441.83 52859.94 00:23:27.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7435.90 29.05 8606.98 1417.78 53330.75 00:23:27.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 16749.80 65.43 3830.40 1212.24 44575.06 00:23:27.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7544.20 29.47 8509.14 1374.33 53926.84 00:23:27.798 ======================================================== 00:23:27.798 Total : 38756.50 151.39 6615.51 1212.24 53926.84 00:23:27.798 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.798 rmmod nvme_tcp 00:23:27.798 rmmod nvme_fabrics 00:23:27.798 rmmod nvme_keyring 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1494827 ']' 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1494827 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1494827 ']' 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1494827 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494827 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:27.798 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:27.799 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1494827' 00:23:27.799 killing process with pid 1494827 00:23:27.799 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1494827 00:23:27.799 17:02:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1494827 00:23:28.060 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.061 
17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.061 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.061 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.061 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.061 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.061 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.061 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:31.365 00:23:31.365 real 0m52.997s 00:23:31.365 user 2m49.734s 00:23:31.365 sys 0m10.333s 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.365 ************************************ 00:23:31.365 END TEST nvmf_perf_adq 00:23:31.365 ************************************ 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:31.365 ************************************ 00:23:31.365 START TEST nvmf_shutdown 00:23:31.365 ************************************ 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:31.365 * Looking for test storage... 
00:23:31.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.365 17:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:31.365 17:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:31.365 ************************************ 00:23:31.365 START TEST nvmf_shutdown_tc1 00:23:31.365 ************************************ 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.365 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.366 17:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.572 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:39.573 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:39.573 17:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:39.573 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:39.573 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:39.573 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.573 17:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.573 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.750 ms 00:23:39.573 00:23:39.573 --- 10.0.0.2 ping statistics --- 00:23:39.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.574 rtt min/avg/max/mdev = 0.750/0.750/0.750/0.000 ms 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:23:39.574 00:23:39.574 --- 10.0.0.1 ping statistics --- 00:23:39.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.574 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1501408 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1501408 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1501408 ']' 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.574 17:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.574 [2024-07-25 17:02:58.767484] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:23:39.574 [2024-07-25 17:02:58.767539] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.574 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.574 [2024-07-25 17:02:58.853621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.574 [2024-07-25 17:02:58.948310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.574 [2024-07-25 17:02:58.948373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.574 [2024-07-25 17:02:58.948381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.574 [2024-07-25 17:02:58.948389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.574 [2024-07-25 17:02:58.948395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
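For reference, the 10.0.0.1/10.0.0.2 connectivity verified a few lines earlier was established by nvmf_tcp_init; condensed, the commands traced above amount to the following recap sketch only. Interface names cvl_0_0/cvl_0_1 are the two E810 ports discovered in this run, and the target application is then launched inside the namespace exactly as shown above:

ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The startup notices for the freshly launched nvmf_tgt continue below.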
00:23:39.574 [2024-07-25 17:02:58.948538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.574 [2024-07-25 17:02:58.948705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.574 [2024-07-25 17:02:58.948871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.574 [2024-07-25 17:02:58.948871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.574 [2024-07-25 17:02:59.601364] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.574 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.574 Malloc1 00:23:39.574 [2024-07-25 17:02:59.704618] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.574 Malloc2 00:23:39.574 Malloc3 00:23:39.574 Malloc4 00:23:39.848 Malloc5 00:23:39.848 Malloc6 00:23:39.848 Malloc7 00:23:39.848 Malloc8 00:23:39.848 Malloc9 00:23:39.848 Malloc10 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1501810 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1501810 /var/tmp/bdevperf.sock 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1501810 ']' 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.848 17:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:39.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.848 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.849 { 00:23:39.849 "params": { 00:23:39.849 "name": "Nvme$subsystem", 00:23:39.849 "trtype": "$TEST_TRANSPORT", 00:23:39.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.849 "adrfam": "ipv4", 00:23:39.849 "trsvcid": "$NVMF_PORT", 00:23:39.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.849 "hdgst": ${hdgst:-false}, 00:23:39.849 "ddgst": ${ddgst:-false} 00:23:39.849 }, 00:23:39.849 "method": "bdev_nvme_attach_controller" 00:23:39.849 } 00:23:39.849 EOF 00:23:39.849 )") 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.849 { 00:23:39.849 "params": { 00:23:39.849 "name": "Nvme$subsystem", 00:23:39.849 "trtype": "$TEST_TRANSPORT", 00:23:39.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.849 "adrfam": "ipv4", 00:23:39.849 "trsvcid": "$NVMF_PORT", 00:23:39.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.849 "hdgst": ${hdgst:-false}, 00:23:39.849 "ddgst": ${ddgst:-false} 00:23:39.849 }, 00:23:39.849 "method": "bdev_nvme_attach_controller" 00:23:39.849 } 00:23:39.849 EOF 00:23:39.849 )") 00:23:39.849 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.111 { 00:23:40.111 "params": { 00:23:40.111 "name": 
"Nvme$subsystem", 00:23:40.111 "trtype": "$TEST_TRANSPORT", 00:23:40.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.111 "adrfam": "ipv4", 00:23:40.111 "trsvcid": "$NVMF_PORT", 00:23:40.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.111 "hdgst": ${hdgst:-false}, 00:23:40.111 "ddgst": ${ddgst:-false} 00:23:40.111 }, 00:23:40.111 "method": "bdev_nvme_attach_controller" 00:23:40.111 } 00:23:40.111 EOF 00:23:40.111 )") 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.111 { 00:23:40.111 "params": { 00:23:40.111 "name": "Nvme$subsystem", 00:23:40.111 "trtype": "$TEST_TRANSPORT", 00:23:40.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.111 "adrfam": "ipv4", 00:23:40.111 "trsvcid": "$NVMF_PORT", 00:23:40.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.111 "hdgst": ${hdgst:-false}, 00:23:40.111 "ddgst": ${ddgst:-false} 00:23:40.111 }, 00:23:40.111 "method": "bdev_nvme_attach_controller" 00:23:40.111 } 00:23:40.111 EOF 00:23:40.111 )") 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.111 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.111 { 00:23:40.111 "params": { 00:23:40.111 "name": "Nvme$subsystem", 00:23:40.111 "trtype": "$TEST_TRANSPORT", 00:23:40.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.111 "adrfam": "ipv4", 00:23:40.111 "trsvcid": "$NVMF_PORT", 00:23:40.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.112 "hdgst": ${hdgst:-false}, 00:23:40.112 "ddgst": ${ddgst:-false} 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 } 00:23:40.112 EOF 00:23:40.112 )") 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.112 { 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme$subsystem", 00:23:40.112 "trtype": "$TEST_TRANSPORT", 00:23:40.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "$NVMF_PORT", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.112 "hdgst": ${hdgst:-false}, 00:23:40.112 "ddgst": ${ddgst:-false} 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 } 00:23:40.112 EOF 00:23:40.112 )") 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.112 [2024-07-25 17:03:00.155112] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:23:40.112 [2024-07-25 17:03:00.155163] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.112 { 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme$subsystem", 00:23:40.112 "trtype": "$TEST_TRANSPORT", 00:23:40.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "$NVMF_PORT", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.112 "hdgst": ${hdgst:-false}, 00:23:40.112 "ddgst": ${ddgst:-false} 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 } 00:23:40.112 EOF 00:23:40.112 )") 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.112 { 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme$subsystem", 00:23:40.112 "trtype": "$TEST_TRANSPORT", 00:23:40.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "$NVMF_PORT", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.112 "hdgst": ${hdgst:-false}, 00:23:40.112 "ddgst": ${ddgst:-false} 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 } 00:23:40.112 EOF 00:23:40.112 )") 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.112 { 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme$subsystem", 00:23:40.112 "trtype": "$TEST_TRANSPORT", 00:23:40.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "$NVMF_PORT", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.112 "hdgst": ${hdgst:-false}, 00:23:40.112 "ddgst": ${ddgst:-false} 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 } 00:23:40.112 EOF 00:23:40.112 )") 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.112 { 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme$subsystem", 00:23:40.112 "trtype": "$TEST_TRANSPORT", 00:23:40.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.112 "adrfam": "ipv4", 
00:23:40.112 "trsvcid": "$NVMF_PORT", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.112 "hdgst": ${hdgst:-false}, 00:23:40.112 "ddgst": ${ddgst:-false} 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 } 00:23:40.112 EOF 00:23:40.112 )") 00:23:40.112 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:40.112 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme1", 00:23:40.112 "trtype": "tcp", 00:23:40.112 "traddr": "10.0.0.2", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "4420", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.112 "hdgst": false, 00:23:40.112 "ddgst": false 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 },{ 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme2", 00:23:40.112 "trtype": "tcp", 00:23:40.112 "traddr": "10.0.0.2", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "4420", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.112 "hdgst": false, 00:23:40.112 "ddgst": false 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 },{ 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme3", 00:23:40.112 "trtype": "tcp", 00:23:40.112 "traddr": "10.0.0.2", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "4420", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.112 "hdgst": false, 00:23:40.112 "ddgst": false 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 },{ 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme4", 00:23:40.112 "trtype": "tcp", 00:23:40.112 "traddr": "10.0.0.2", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "4420", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.112 "hdgst": false, 00:23:40.112 "ddgst": false 00:23:40.112 }, 00:23:40.112 "method": "bdev_nvme_attach_controller" 00:23:40.112 },{ 00:23:40.112 "params": { 00:23:40.112 "name": "Nvme5", 00:23:40.112 "trtype": "tcp", 00:23:40.112 "traddr": "10.0.0.2", 00:23:40.112 "adrfam": "ipv4", 00:23:40.112 "trsvcid": "4420", 00:23:40.112 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:40.112 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.113 "hdgst": false, 00:23:40.113 "ddgst": false 00:23:40.113 }, 00:23:40.113 "method": "bdev_nvme_attach_controller" 00:23:40.113 },{ 00:23:40.113 "params": { 00:23:40.113 "name": "Nvme6", 00:23:40.113 "trtype": "tcp", 00:23:40.113 "traddr": "10.0.0.2", 00:23:40.113 "adrfam": "ipv4", 00:23:40.113 "trsvcid": "4420", 00:23:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.113 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.113 "hdgst": false, 00:23:40.113 "ddgst": false 00:23:40.113 }, 00:23:40.113 "method": "bdev_nvme_attach_controller" 00:23:40.113 },{ 00:23:40.113 "params": { 00:23:40.113 "name": "Nvme7", 00:23:40.113 "trtype": 
"tcp", 00:23:40.113 "traddr": "10.0.0.2", 00:23:40.113 "adrfam": "ipv4", 00:23:40.113 "trsvcid": "4420", 00:23:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.113 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.113 "hdgst": false, 00:23:40.113 "ddgst": false 00:23:40.113 }, 00:23:40.113 "method": "bdev_nvme_attach_controller" 00:23:40.113 },{ 00:23:40.113 "params": { 00:23:40.113 "name": "Nvme8", 00:23:40.113 "trtype": "tcp", 00:23:40.113 "traddr": "10.0.0.2", 00:23:40.113 "adrfam": "ipv4", 00:23:40.113 "trsvcid": "4420", 00:23:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.113 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.113 "hdgst": false, 00:23:40.113 "ddgst": false 00:23:40.113 }, 00:23:40.113 "method": "bdev_nvme_attach_controller" 00:23:40.113 },{ 00:23:40.113 "params": { 00:23:40.113 "name": "Nvme9", 00:23:40.113 "trtype": "tcp", 00:23:40.113 "traddr": "10.0.0.2", 00:23:40.113 "adrfam": "ipv4", 00:23:40.113 "trsvcid": "4420", 00:23:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.113 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:40.113 "hdgst": false, 00:23:40.113 "ddgst": false 00:23:40.113 }, 00:23:40.113 "method": "bdev_nvme_attach_controller" 00:23:40.113 },{ 00:23:40.113 "params": { 00:23:40.113 "name": "Nvme10", 00:23:40.113 "trtype": "tcp", 00:23:40.113 "traddr": "10.0.0.2", 00:23:40.113 "adrfam": "ipv4", 00:23:40.113 "trsvcid": "4420", 00:23:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.113 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.113 "hdgst": false, 00:23:40.113 "ddgst": false 00:23:40.113 }, 00:23:40.113 "method": "bdev_nvme_attach_controller" 00:23:40.113 }' 00:23:40.113 [2024-07-25 17:03:00.215300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.113 [2024-07-25 17:03:00.280401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1501810 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:41.502 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:42.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1501810 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:42.448 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1501408 00:23:42.448 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:42.448 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:42.448 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:42.448 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 [2024-07-25 17:03:02.668581] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:23:42.449 [2024-07-25 17:03:02.668636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502332 ] 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.449 "trsvcid": "$NVMF_PORT", 00:23:42.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.449 "hdgst": ${hdgst:-false}, 00:23:42.449 "ddgst": ${ddgst:-false} 00:23:42.449 }, 00:23:42.449 "method": "bdev_nvme_attach_controller" 00:23:42.449 } 00:23:42.449 EOF 00:23:42.449 )") 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.449 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.449 { 00:23:42.449 "params": { 00:23:42.449 "name": "Nvme$subsystem", 00:23:42.449 "trtype": "$TEST_TRANSPORT", 00:23:42.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.449 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "$NVMF_PORT", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.450 "hdgst": ${hdgst:-false}, 00:23:42.450 "ddgst": ${ddgst:-false} 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 } 00:23:42.450 EOF 00:23:42.450 )") 00:23:42.450 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.450 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.450 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.450 { 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme$subsystem", 00:23:42.450 "trtype": "$TEST_TRANSPORT", 00:23:42.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.450 
"adrfam": "ipv4", 00:23:42.450 "trsvcid": "$NVMF_PORT", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.450 "hdgst": ${hdgst:-false}, 00:23:42.450 "ddgst": ${ddgst:-false} 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 } 00:23:42.450 EOF 00:23:42.450 )") 00:23:42.450 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.450 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.450 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:42.450 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:42.450 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme1", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme2", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme3", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme4", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme5", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme6", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme7", 
00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme8", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme9", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 },{ 00:23:42.450 "params": { 00:23:42.450 "name": "Nvme10", 00:23:42.450 "trtype": "tcp", 00:23:42.450 "traddr": "10.0.0.2", 00:23:42.450 "adrfam": "ipv4", 00:23:42.450 "trsvcid": "4420", 00:23:42.450 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:42.450 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:42.450 "hdgst": false, 00:23:42.450 "ddgst": false 00:23:42.450 }, 00:23:42.450 "method": "bdev_nvme_attach_controller" 00:23:42.450 }' 00:23:42.712 [2024-07-25 17:03:02.729281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.712 [2024-07-25 17:03:02.793706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.096 Running I/O for 1 seconds... 
00:23:45.483 00:23:45.483 Latency(us) 00:23:45.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.483 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme1n1 : 1.05 183.15 11.45 0.00 0.00 345796.27 42598.40 304087.04 00:23:45.483 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme2n1 : 1.16 275.38 17.21 0.00 0.00 225026.39 22063.79 232434.35 00:23:45.483 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme3n1 : 1.15 166.89 10.43 0.00 0.00 366724.84 26105.17 339039.57 00:23:45.483 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme4n1 : 1.08 177.32 11.08 0.00 0.00 337842.35 26651.31 279620.27 00:23:45.483 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme5n1 : 1.18 216.07 13.50 0.00 0.00 272824.11 24139.09 281367.89 00:23:45.483 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme6n1 : 1.18 217.56 13.60 0.00 0.00 267051.95 25231.36 286610.77 00:23:45.483 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme7n1 : 1.19 268.06 16.75 0.00 0.00 213146.62 19114.67 239424.85 00:23:45.483 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme8n1 : 1.16 276.65 17.29 0.00 0.00 201778.18 24029.87 235929.60 00:23:45.483 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme9n1 : 1.20 266.79 16.67 0.00 0.00 206557.01 18568.53 263891.63 00:23:45.483 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.483 Verification LBA range: start 0x0 length 0x400 00:23:45.483 Nvme10n1 : 1.18 216.24 13.51 0.00 0.00 249602.99 24903.68 279620.27 00:23:45.483 =================================================================================================================== 00:23:45.483 Total : 2264.10 141.51 0.00 0.00 257110.92 18568.53 339039.57 00:23:45.483 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:45.483 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:45.483 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.484 17:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.484 rmmod nvme_tcp 00:23:45.484 rmmod nvme_fabrics 00:23:45.484 rmmod nvme_keyring 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1501408 ']' 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1501408 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1501408 ']' 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1501408 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501408 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501408' 00:23:45.484 killing process with pid 1501408 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1501408 00:23:45.484 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1501408 00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.746 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.297 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.297 00:23:48.297 real 0m16.517s 00:23:48.297 user 0m33.746s 00:23:48.297 sys 0m6.580s 00:23:48.297 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:48.297 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.297 ************************************ 00:23:48.297 END TEST nvmf_shutdown_tc1 00:23:48.297 ************************************ 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:48.297 ************************************ 00:23:48.297 START TEST nvmf_shutdown_tc2 00:23:48.297 ************************************ 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:48.297 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.298 17:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.298 17:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:48.298 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:48.298 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.298 17:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:48.298 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:48.298 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.298 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.299 17:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:23:48.299 00:23:48.299 --- 10.0.0.2 ping statistics --- 00:23:48.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.299 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:23:48.299 00:23:48.299 --- 10.0.0.1 ping statistics --- 00:23:48.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.299 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1503701 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1503701 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1503701 ']' 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
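Condensed from the nvmf_tcp_init trace just above, the plumbing this run builds around the target boils down to the commands below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this host detected; another machine would substitute its own.

# Move the target-facing port into a private network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side (default namespace) gets 10.0.0.1, target side gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic (port 4420) in from the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Reachability checks in both directions, matching the ping output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" and later listens on 10.0.0.2:4420, while bdevperf connects to that address from the default namespace, which is also the traddr used in the generated JSON config.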
00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.299 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.299 [2024-07-25 17:03:08.535801] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:23:48.299 [2024-07-25 17:03:08.535891] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.561 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.561 [2024-07-25 17:03:08.624359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.561 [2024-07-25 17:03:08.685344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.561 [2024-07-25 17:03:08.685378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.561 [2024-07-25 17:03:08.685383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.561 [2024-07-25 17:03:08.685388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.561 [2024-07-25 17:03:08.685392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.561 [2024-07-25 17:03:08.685651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.561 [2024-07-25 17:03:08.685777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.561 [2024-07-25 17:03:08.686067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.561 [2024-07-25 17:03:08.686067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.137 [2024-07-25 17:03:09.351639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.137 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.400 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.400 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.400 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:49.400 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.400 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.400 Malloc1 00:23:49.400 [2024-07-25 17:03:09.450299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.400 Malloc2 00:23:49.400 Malloc3 00:23:49.400 Malloc4 00:23:49.400 Malloc5 00:23:49.400 Malloc6 00:23:49.400 Malloc7 00:23:49.663 Malloc8 00:23:49.663 Malloc9 00:23:49.663 Malloc10 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1504151 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1504151 /var/tmp/bdevperf.sock 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1504151 ']' 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
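The create_subsystems step that produces the Malloc1 through Malloc10 lines and the 10.0.0.2:4420 listener notice above batches its RPCs through rpcs.txt, so the individual calls are not visible in this trace. Per subsystem they amount to roughly the following, shown here with scripts/rpc.py for clarity; the malloc size/block-size values and the serial number are illustrative assumptions, not the harness's exact parameters.

# Transport was already created above via: rpc_cmd nvmf_create_transport -t tcp -o -u 8192
i=1   # the harness loops i over 1..10
./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512                  # size/block size assumed
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # serial assumed
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420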
00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.663 { 00:23:49.663 "params": { 00:23:49.663 "name": "Nvme$subsystem", 00:23:49.663 "trtype": "$TEST_TRANSPORT", 00:23:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.663 "adrfam": "ipv4", 00:23:49.663 "trsvcid": "$NVMF_PORT", 00:23:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.663 "hdgst": ${hdgst:-false}, 00:23:49.663 "ddgst": ${ddgst:-false} 00:23:49.663 }, 00:23:49.663 "method": "bdev_nvme_attach_controller" 00:23:49.663 } 00:23:49.663 EOF 00:23:49.663 )") 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.663 { 00:23:49.663 "params": { 00:23:49.663 "name": "Nvme$subsystem", 00:23:49.663 "trtype": "$TEST_TRANSPORT", 00:23:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.663 "adrfam": "ipv4", 00:23:49.663 "trsvcid": "$NVMF_PORT", 00:23:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.663 "hdgst": ${hdgst:-false}, 00:23:49.663 "ddgst": ${ddgst:-false} 00:23:49.663 }, 00:23:49.663 "method": "bdev_nvme_attach_controller" 00:23:49.663 } 00:23:49.663 EOF 00:23:49.663 )") 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.663 { 00:23:49.663 "params": { 00:23:49.663 "name": "Nvme$subsystem", 00:23:49.663 "trtype": "$TEST_TRANSPORT", 00:23:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.663 "adrfam": "ipv4", 00:23:49.663 "trsvcid": "$NVMF_PORT", 00:23:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.663 "hdgst": ${hdgst:-false}, 00:23:49.663 "ddgst": ${ddgst:-false} 00:23:49.663 }, 00:23:49.663 "method": 
"bdev_nvme_attach_controller" 00:23:49.663 } 00:23:49.663 EOF 00:23:49.663 )") 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.663 { 00:23:49.663 "params": { 00:23:49.663 "name": "Nvme$subsystem", 00:23:49.663 "trtype": "$TEST_TRANSPORT", 00:23:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.663 "adrfam": "ipv4", 00:23:49.663 "trsvcid": "$NVMF_PORT", 00:23:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.663 "hdgst": ${hdgst:-false}, 00:23:49.663 "ddgst": ${ddgst:-false} 00:23:49.663 }, 00:23:49.663 "method": "bdev_nvme_attach_controller" 00:23:49.663 } 00:23:49.663 EOF 00:23:49.663 )") 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.663 { 00:23:49.663 "params": { 00:23:49.663 "name": "Nvme$subsystem", 00:23:49.663 "trtype": "$TEST_TRANSPORT", 00:23:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.663 "adrfam": "ipv4", 00:23:49.663 "trsvcid": "$NVMF_PORT", 00:23:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.663 "hdgst": ${hdgst:-false}, 00:23:49.663 "ddgst": ${ddgst:-false} 00:23:49.663 }, 00:23:49.663 "method": "bdev_nvme_attach_controller" 00:23:49.663 } 00:23:49.663 EOF 00:23:49.663 )") 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.663 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.663 { 00:23:49.663 "params": { 00:23:49.663 "name": "Nvme$subsystem", 00:23:49.663 "trtype": "$TEST_TRANSPORT", 00:23:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.663 "adrfam": "ipv4", 00:23:49.663 "trsvcid": "$NVMF_PORT", 00:23:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.663 "hdgst": ${hdgst:-false}, 00:23:49.663 "ddgst": ${ddgst:-false} 00:23:49.663 }, 00:23:49.663 "method": "bdev_nvme_attach_controller" 00:23:49.663 } 00:23:49.663 EOF 00:23:49.663 )") 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.664 { 00:23:49.664 "params": { 00:23:49.664 "name": "Nvme$subsystem", 00:23:49.664 "trtype": "$TEST_TRANSPORT", 00:23:49.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.664 "adrfam": "ipv4", 00:23:49.664 "trsvcid": "$NVMF_PORT", 00:23:49.664 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.664 "hdgst": ${hdgst:-false}, 00:23:49.664 "ddgst": ${ddgst:-false} 00:23:49.664 }, 00:23:49.664 "method": "bdev_nvme_attach_controller" 00:23:49.664 } 00:23:49.664 EOF 00:23:49.664 )") 00:23:49.664 [2024-07-25 17:03:09.908308] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:23:49.664 [2024-07-25 17:03:09.908363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504151 ] 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.664 { 00:23:49.664 "params": { 00:23:49.664 "name": "Nvme$subsystem", 00:23:49.664 "trtype": "$TEST_TRANSPORT", 00:23:49.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.664 "adrfam": "ipv4", 00:23:49.664 "trsvcid": "$NVMF_PORT", 00:23:49.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.664 "hdgst": ${hdgst:-false}, 00:23:49.664 "ddgst": ${ddgst:-false} 00:23:49.664 }, 00:23:49.664 "method": "bdev_nvme_attach_controller" 00:23:49.664 } 00:23:49.664 EOF 00:23:49.664 )") 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.664 { 00:23:49.664 "params": { 00:23:49.664 "name": "Nvme$subsystem", 00:23:49.664 "trtype": "$TEST_TRANSPORT", 00:23:49.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.664 "adrfam": "ipv4", 00:23:49.664 "trsvcid": "$NVMF_PORT", 00:23:49.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.664 "hdgst": ${hdgst:-false}, 00:23:49.664 "ddgst": ${ddgst:-false} 00:23:49.664 }, 00:23:49.664 "method": "bdev_nvme_attach_controller" 00:23:49.664 } 00:23:49.664 EOF 00:23:49.664 )") 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.664 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.664 { 00:23:49.664 "params": { 00:23:49.664 "name": "Nvme$subsystem", 00:23:49.664 "trtype": "$TEST_TRANSPORT", 00:23:49.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.664 "adrfam": "ipv4", 00:23:49.664 "trsvcid": "$NVMF_PORT", 00:23:49.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.664 "hdgst": ${hdgst:-false}, 00:23:49.664 "ddgst": ${ddgst:-false} 00:23:49.664 }, 00:23:49.664 "method": "bdev_nvme_attach_controller" 00:23:49.664 } 00:23:49.664 EOF 00:23:49.664 )") 00:23:49.664 17:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.664 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.927 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:49.927 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:49.927 17:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme1", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme2", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme3", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme4", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme5", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme6", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme7", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 
00:23:49.927 "name": "Nvme8", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme9", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.927 },{ 00:23:49.927 "params": { 00:23:49.927 "name": "Nvme10", 00:23:49.927 "trtype": "tcp", 00:23:49.927 "traddr": "10.0.0.2", 00:23:49.927 "adrfam": "ipv4", 00:23:49.927 "trsvcid": "4420", 00:23:49.927 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:49.927 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:49.927 "hdgst": false, 00:23:49.927 "ddgst": false 00:23:49.927 }, 00:23:49.927 "method": "bdev_nvme_attach_controller" 00:23:49.928 }' 00:23:49.928 [2024-07-25 17:03:09.968714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.928 [2024-07-25 17:03:10.036800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.316 Running I/O for 10 seconds... 00:23:51.316 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.316 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:51.316 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:51.316 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.316 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:51.578 17:03:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1504151 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1504151 ']' 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1504151 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1504151 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1504151' 00:23:51.840 killing process with pid 1504151 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1504151 00:23:51.840 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1504151 00:23:52.101 Received shutdown signal, test time was about 0.726285 seconds 00:23:52.102 00:23:52.102 Latency(us) 00:23:52.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.102 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme1n1 : 0.66 291.60 18.23 0.00 0.00 215756.80 23483.73 242920.11 00:23:52.102 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme2n1 : 0.67 189.91 11.87 0.00 0.00 321822.72 25449.81 265639.25 00:23:52.102 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme3n1 : 0.65 99.01 6.19 0.00 0.00 589837.65 159907.84 436906.67 00:23:52.102 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme4n1 : 0.72 174.28 10.89 0.00 0.00 330577.11 14417.92 379234.99 00:23:52.102 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme5n1 : 0.69 185.23 11.58 0.00 0.00 301314.13 15947.09 339039.57 00:23:52.102 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme6n1 : 0.73 176.46 11.03 0.00 0.00 308685.65 17476.27 394963.63 00:23:52.102 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme7n1 : 0.64 430.01 26.88 0.00 0.00 117581.86 4614.83 144179.20 00:23:52.102 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme8n1 : 0.67 288.20 18.01 0.00 0.00 172935.40 23702.19 178257.92 00:23:52.102 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme9n1 : 0.68 189.30 11.83 0.00 0.00 253133.65 42598.40 228939.09 00:23:52.102 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.102 Verification LBA range: start 0x0 length 0x400 00:23:52.102 Nvme10n1 : 0.63 204.10 12.76 0.00 0.00 221304.32 28398.93 253405.87 00:23:52.102 =================================================================================================================== 00:23:52.102 Total : 2228.09 139.26 0.00 0.00 246049.42 4614.83 436906.67 00:23:52.363 17:03:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1503701 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f 
./local-job0-0-verify.state 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.319 rmmod nvme_tcp 00:23:53.319 rmmod nvme_fabrics 00:23:53.319 rmmod nvme_keyring 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1503701 ']' 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1503701 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1503701 ']' 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1503701 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503701 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503701' 00:23:53.319 killing process with pid 1503701 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1503701 00:23:53.319 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1503701 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.581 17:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.134 00:23:56.134 real 0m7.799s 00:23:56.134 user 0m22.870s 00:23:56.134 sys 0m1.298s 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.134 ************************************ 00:23:56.134 END TEST nvmf_shutdown_tc2 00:23:56.134 ************************************ 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:56.134 ************************************ 00:23:56.134 START TEST nvmf_shutdown_tc3 00:23:56.134 ************************************ 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.134 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:56.135 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:56.135 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.135 17:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:56.135 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:56.135 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.135 17:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:23:56.135 00:23:56.135 --- 10.0.0.2 ping statistics --- 00:23:56.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.135 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:23:56.135 00:23:56.135 --- 10.0.0.1 ping statistics --- 00:23:56.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.135 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:56.135 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1505685 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1505685 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1505685 ']' 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
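
The nvmf_tcp_init and nvmfappstart steps traced above amount to a small amount of iproute2 plumbing followed by a wait for the target's RPC socket. A minimal sketch of that bring-up in bash, assuming the same cvl_0_0/cvl_0_1 port names and 10.0.0.x addressing seen in this run; the rpc.py polling loop at the end is an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation:

    # Move the target-side port into its own network namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and confirm reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch nvmf_tgt inside the namespace and block until its RPC socket answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init &> /dev/null; do
        sleep 0.5
    done

Because /var/tmp/spdk.sock is a UNIX-domain socket it lives in the filesystem rather than in the network namespace, which is why the rpc_cmd calls later in this log can reach the target without ip netns exec.
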
00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.136 17:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.136 [2024-07-25 17:03:16.380344] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:23:56.136 [2024-07-25 17:03:16.380411] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.398 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.398 [2024-07-25 17:03:16.468071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.398 [2024-07-25 17:03:16.530824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.398 [2024-07-25 17:03:16.530854] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.398 [2024-07-25 17:03:16.530859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.398 [2024-07-25 17:03:16.530864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.398 [2024-07-25 17:03:16.530868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.398 [2024-07-25 17:03:16.530986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.398 [2024-07-25 17:03:16.531149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.398 [2024-07-25 17:03:16.531290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:56.398 [2024-07-25 17:03:16.531514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.971 [2024-07-25 17:03:17.208891] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:56.971 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.972 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.233 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.233 Malloc1 00:23:57.233 [2024-07-25 17:03:17.307642] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.233 Malloc2 00:23:57.233 Malloc3 00:23:57.233 Malloc4 00:23:57.233 Malloc5 00:23:57.233 Malloc6 00:23:57.496 Malloc7 00:23:57.496 Malloc8 00:23:57.496 Malloc9 00:23:57.496 Malloc10 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1506074 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1506074 /var/tmp/bdevperf.sock 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1506074 ']' 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
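
Each pass through the create_subsystems loop above appends one group of RPCs to rpcs.txt, and the whole batch is then replayed against the target started earlier (the TCP transport itself was already created by nvmf_create_transport). A sketch of the kind of batch one iteration produces; the malloc bdev size, block size and serial number below are placeholders rather than values taken from this run:

    # Build rpcs.txt: one malloc-backed subsystem per iteration, all listening
    # on the in-namespace target address 10.0.0.2:4420.
    : > rpcs.txt
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    # Replay the batch one command per rpc.py invocation; the harness instead
    # streams the file through its persistent rpc_cmd session.
    while read -r cmd; do
        ./scripts/rpc.py -s /var/tmp/spdk.sock $cmd   # word splitting on $cmd is intentional
    done < rpcs.txt
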
00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.496 { 00:23:57.496 "params": { 00:23:57.496 "name": "Nvme$subsystem", 00:23:57.496 "trtype": "$TEST_TRANSPORT", 00:23:57.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.496 "adrfam": "ipv4", 00:23:57.496 "trsvcid": "$NVMF_PORT", 00:23:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.496 "hdgst": ${hdgst:-false}, 00:23:57.496 "ddgst": ${ddgst:-false} 00:23:57.496 }, 00:23:57.496 "method": "bdev_nvme_attach_controller" 00:23:57.496 } 00:23:57.496 EOF 00:23:57.496 )") 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.496 { 00:23:57.496 "params": { 00:23:57.496 "name": "Nvme$subsystem", 00:23:57.496 "trtype": "$TEST_TRANSPORT", 00:23:57.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.496 "adrfam": "ipv4", 00:23:57.496 "trsvcid": "$NVMF_PORT", 00:23:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.496 "hdgst": ${hdgst:-false}, 00:23:57.496 "ddgst": ${ddgst:-false} 00:23:57.496 }, 00:23:57.496 "method": "bdev_nvme_attach_controller" 00:23:57.496 } 00:23:57.496 EOF 00:23:57.496 )") 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.496 { 00:23:57.496 "params": { 00:23:57.496 "name": "Nvme$subsystem", 00:23:57.496 "trtype": "$TEST_TRANSPORT", 00:23:57.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.496 "adrfam": "ipv4", 00:23:57.496 "trsvcid": "$NVMF_PORT", 00:23:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.496 "hdgst": ${hdgst:-false}, 00:23:57.496 "ddgst": ${ddgst:-false} 00:23:57.496 }, 00:23:57.496 "method": 
"bdev_nvme_attach_controller" 00:23:57.496 } 00:23:57.496 EOF 00:23:57.496 )") 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.496 { 00:23:57.496 "params": { 00:23:57.496 "name": "Nvme$subsystem", 00:23:57.496 "trtype": "$TEST_TRANSPORT", 00:23:57.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.496 "adrfam": "ipv4", 00:23:57.496 "trsvcid": "$NVMF_PORT", 00:23:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.496 "hdgst": ${hdgst:-false}, 00:23:57.496 "ddgst": ${ddgst:-false} 00:23:57.496 }, 00:23:57.496 "method": "bdev_nvme_attach_controller" 00:23:57.496 } 00:23:57.496 EOF 00:23:57.496 )") 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.496 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.496 { 00:23:57.496 "params": { 00:23:57.496 "name": "Nvme$subsystem", 00:23:57.496 "trtype": "$TEST_TRANSPORT", 00:23:57.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.496 "adrfam": "ipv4", 00:23:57.496 "trsvcid": "$NVMF_PORT", 00:23:57.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.496 "hdgst": ${hdgst:-false}, 00:23:57.497 "ddgst": ${ddgst:-false} 00:23:57.497 }, 00:23:57.497 "method": "bdev_nvme_attach_controller" 00:23:57.497 } 00:23:57.497 EOF 00:23:57.497 )") 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.497 { 00:23:57.497 "params": { 00:23:57.497 "name": "Nvme$subsystem", 00:23:57.497 "trtype": "$TEST_TRANSPORT", 00:23:57.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.497 "adrfam": "ipv4", 00:23:57.497 "trsvcid": "$NVMF_PORT", 00:23:57.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.497 "hdgst": ${hdgst:-false}, 00:23:57.497 "ddgst": ${ddgst:-false} 00:23:57.497 }, 00:23:57.497 "method": "bdev_nvme_attach_controller" 00:23:57.497 } 00:23:57.497 EOF 00:23:57.497 )") 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.497 [2024-07-25 17:03:17.746982] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:23:57.497 [2024-07-25 17:03:17.747046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506074 ] 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.497 { 00:23:57.497 "params": { 00:23:57.497 "name": "Nvme$subsystem", 00:23:57.497 "trtype": "$TEST_TRANSPORT", 00:23:57.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.497 "adrfam": "ipv4", 00:23:57.497 "trsvcid": "$NVMF_PORT", 00:23:57.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.497 "hdgst": ${hdgst:-false}, 00:23:57.497 "ddgst": ${ddgst:-false} 00:23:57.497 }, 00:23:57.497 "method": "bdev_nvme_attach_controller" 00:23:57.497 } 00:23:57.497 EOF 00:23:57.497 )") 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.497 { 00:23:57.497 "params": { 00:23:57.497 "name": "Nvme$subsystem", 00:23:57.497 "trtype": "$TEST_TRANSPORT", 00:23:57.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.497 "adrfam": "ipv4", 00:23:57.497 "trsvcid": "$NVMF_PORT", 00:23:57.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.497 "hdgst": ${hdgst:-false}, 00:23:57.497 "ddgst": ${ddgst:-false} 00:23:57.497 }, 00:23:57.497 "method": "bdev_nvme_attach_controller" 00:23:57.497 } 00:23:57.497 EOF 00:23:57.497 )") 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.497 { 00:23:57.497 "params": { 00:23:57.497 "name": "Nvme$subsystem", 00:23:57.497 "trtype": "$TEST_TRANSPORT", 00:23:57.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.497 "adrfam": "ipv4", 00:23:57.497 "trsvcid": "$NVMF_PORT", 00:23:57.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.497 "hdgst": ${hdgst:-false}, 00:23:57.497 "ddgst": ${ddgst:-false} 00:23:57.497 }, 00:23:57.497 "method": "bdev_nvme_attach_controller" 00:23:57.497 } 00:23:57.497 EOF 00:23:57.497 )") 00:23:57.497 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.759 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.759 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.759 { 00:23:57.759 "params": { 00:23:57.759 "name": "Nvme$subsystem", 00:23:57.759 "trtype": "$TEST_TRANSPORT", 00:23:57.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.759 
"adrfam": "ipv4", 00:23:57.759 "trsvcid": "$NVMF_PORT", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.759 "hdgst": ${hdgst:-false}, 00:23:57.759 "ddgst": ${ddgst:-false} 00:23:57.759 }, 00:23:57.759 "method": "bdev_nvme_attach_controller" 00:23:57.759 } 00:23:57.759 EOF 00:23:57.759 )") 00:23:57.759 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.759 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.759 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:57.759 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:57.759 17:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:57.759 "params": { 00:23:57.759 "name": "Nvme1", 00:23:57.759 "trtype": "tcp", 00:23:57.759 "traddr": "10.0.0.2", 00:23:57.759 "adrfam": "ipv4", 00:23:57.759 "trsvcid": "4420", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.759 "hdgst": false, 00:23:57.759 "ddgst": false 00:23:57.759 }, 00:23:57.759 "method": "bdev_nvme_attach_controller" 00:23:57.759 },{ 00:23:57.759 "params": { 00:23:57.759 "name": "Nvme2", 00:23:57.759 "trtype": "tcp", 00:23:57.759 "traddr": "10.0.0.2", 00:23:57.759 "adrfam": "ipv4", 00:23:57.759 "trsvcid": "4420", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:57.759 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:57.759 "hdgst": false, 00:23:57.759 "ddgst": false 00:23:57.759 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme3", 00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme4", 00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme5", 00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme6", 00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme7", 
00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme8", 00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme9", 00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 },{ 00:23:57.760 "params": { 00:23:57.760 "name": "Nvme10", 00:23:57.760 "trtype": "tcp", 00:23:57.760 "traddr": "10.0.0.2", 00:23:57.760 "adrfam": "ipv4", 00:23:57.760 "trsvcid": "4420", 00:23:57.760 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:57.760 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:57.760 "hdgst": false, 00:23:57.760 "ddgst": false 00:23:57.760 }, 00:23:57.760 "method": "bdev_nvme_attach_controller" 00:23:57.760 }' 00:23:57.760 [2024-07-25 17:03:17.814560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.760 [2024-07-25 17:03:17.879327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.149 Running I/O for 10 seconds... 
00:23:59.149 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.149 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:59.149 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:59.149 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.149 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:59.411 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
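The trace here is the waitforio helper from target/shutdown.sh (lines 50-69), invoked as waitforio /var/tmp/bdevperf.sock Nvme1n1: it polls bdevperf over its RPC socket, reads num_read_ops for the bdev with bdev_get_iostat plus jq, and succeeds once at least 100 reads have completed, retrying up to 10 times with a 0.25 s pause. The first poll above sees only 3 reads; the second poll, continued below, reaches 129 and breaks out. A minimal sketch of that loop, reconstructed from the traced commands, follows; rpc_cmd is assumed to be the SPDK test wrapper around scripts/rpc.py that the autotest environment already provides.

# Sketch of the traced waitforio polling loop (argument handling simplified).
waitforio() {
	local sock=$1 bdev=$2
	local ret=1 i read_io_count

	# Bail out if either the RPC socket or the bdev name is missing.
	[[ -n $sock && -n $bdev ]] || return 1

	# Poll up to 10 times, 0.25 s apart, until the bdev has served >= 100 reads.
	for ((i = 10; i != 0; i--)); do
		read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
		if ((read_io_count >= 100)); then
			ret=0
			break
		fi
		sleep 0.25
	done

	return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1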
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.681 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=129 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1505685 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1505685 ']' 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1505685 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505685 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1505685' 00:23:59.682 killing process with pid 1505685 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1505685 00:23:59.682 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1505685 00:23:59.682 [2024-07-25 17:03:19.896909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1dfe0 is same with the state(5) to be set 00:23:59.682 [2024-07-25 17:03:19.896954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1dfe0 is same with the state(5) to be set 00:23:59.682 [2024-07-25 17:03:19.896960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1dfe0 is same with the state(5) to be set 00:23:59.682 [2024-07-25 17:03:19.896965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1dfe0 is same with the state(5) to be set 00:23:59.682 [2024-07-25 17:03:19.896970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1dfe0 is same with the state(5) to be set 00:23:59.682 [2024-07-25 17:03:19.896975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1d1dfe0 is same with the state(5) to be set (same message repeated for tqpair=0x1d1dfe0 through 17:03:19.897236)
00:23:59.682 [2024-07-25 17:03:19.897944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fc10 is same with the state(5) to be set
00:23:59.682 [2024-07-25 17:03:19.899150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4da90 is same with the state(5) to be set
00:23:59.682 [2024-07-25 17:03:19.899637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4df50 is same with the state(5) to be set
00:23:59.682 [2024-07-25 17:03:19.900130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4e410 is same with the state(5) to be set (same message repeated for tqpair=0x1b4e410 through 17:03:19.900428)
00:23:59.683 [2024-07-25 17:03:19.901174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4e8f0 is same with the state(5) to be set (same message repeated for tqpair=0x1b4e8f0 through 17:03:19.901605)
00:23:59.684 [2024-07-25 17:03:19.902324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4edb0 is same with the state(5) to be set
00:23:59.684 [2024-07-25 17:03:19.902959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f270 is same with the state(5) to be set (same message repeated for tqpair=0x1b4f270 through 17:03:19.903256)
00:23:59.684 [2024-07-25 17:03:19.903706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f730 is same with the state(5) to be set (same message repeated for tqpair=0x1b4f730 through 17:03:19.904001)
00:23:59.685 [2024-07-25 17:03:19.913104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.685 [2024-07-25 17:03:19.913141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.685 [2024-07-25 17:03:19.913160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.685 [2024-07-25 17:03:19.913168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.685 [2024-07-25 17:03:19.913179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.685 [2024-07-25 17:03:19.913186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.685 [2024-07-25 17:03:19.913196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.685 [2024-07-25 17:03:19.913209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.685 [2024-07-25 17:03:19.913219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.685 [2024-07-25 17:03:19.913226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.685 [2024-07-25 17:03:19.913375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.685 [2024-07-25 17:03:19.913382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.686 [2024-07-25 17:03:19.913574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 
[2024-07-25 17:03:19.913738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.686 [2024-07-25 17:03:19.913853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.686 [2024-07-25 17:03:19.913860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.913869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.913885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 
17:03:19.913903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.913919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.913935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.913952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.913969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.913985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.913992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 
17:03:19.914067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.687 [2024-07-25 17:03:19.914209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.914932] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1006090 was disconnected and freed. reset controller. 
00:23:59.687 [2024-07-25 17:03:19.915009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1100480 is same with the state(5) to be set 00:23:59.687 [2024-07-25 17:03:19.915105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f57c0 is same with the state(5) to be set 00:23:59.687 [2024-07-25 17:03:19.915194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10de2c0 is same with the state(5) to be set 00:23:59.687 [2024-07-25 17:03:19.915285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ddea0 is same with the state(5) to be set 00:23:59.687 [2024-07-25 17:03:19.915370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:59.687 [2024-07-25 17:03:19.915408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.687 [2024-07-25 17:03:19.915423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.687 [2024-07-25 17:03:19.915430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d05e0 is same with the state(5) to be set 00:23:59.688 [2024-07-25 17:03:19.915450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47e30 is same with the state(5) to be set 00:23:59.688 [2024-07-25 17:03:19.915532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915586] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6a780 is same with the state(5) to be set 00:23:59.688 [2024-07-25 17:03:19.915616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106250 is same with the state(5) to be set 00:23:59.688 [2024-07-25 17:03:19.915697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5ecf0 is same with the state(5) to be set 00:23:59.688 [2024-07-25 17:03:19.915778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 
[2024-07-25 17:03:19.915786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.688 [2024-07-25 17:03:19.915832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a5d0 is same with the state(5) to be set 00:23:59.688 [2024-07-25 17:03:19.915923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.915933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.915953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.915970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.915986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.915995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.688 [2024-07-25 17:03:19.916267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.688 [2024-07-25 17:03:19.916276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.689 [2024-07-25 17:03:19.916521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.689 [2024-07-25 17:03:19.916528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated *NOTICE* pairs: WRITE sqid:1 cid:37 through cid:63 (lba 21120 through 24448, len:128 each), every command completed as ABORTED - SQ DELETION (00/08) qid:1, timestamps 17:03:19.916537 through 17:03:19.916966]
00:23:59.689 [2024-07-25 17:03:19.916974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108b480 is same with the state(5) to be set
00:23:59.689 [2024-07-25 17:03:19.917016] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x108b480 was disconnected and freed. reset controller.
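The *NOTICE* pairs above follow a fixed layout: nvme_io_qpair_print_command prints the queued command (opcode, sqid, cid, nsid, lba, len) and spdk_nvme_print_completion prints its completion status, here consistently ABORTED - SQ DELETION (00/08) while the qpair is being torn down ahead of the controller reset noted above. A minimal, hypothetical Python sketch (not part of SPDK or of this test run) for parsing lines in this format and tallying printed commands and completion statuses could look like the following; the regexes assume only the layout visible in the log above.

import re
from collections import defaultdict

# Hypothetical helper, not part of SPDK or this autotest: parse the
# nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE*
# lines shown above and tally them.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>[A-Z -]+) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
)

def summarize(log_text):
    """Count printed commands per (sqid, opcode) and completions per status."""
    commands = defaultdict(int)
    completions = defaultdict(int)
    for m in CMD_RE.finditer(log_text):
        commands[(int(m.group("sqid")), m.group("op"))] += 1
    for m in CPL_RE.finditer(log_text):
        completions[(m.group("status"), m.group("sct"), m.group("sc"))] += 1
    return dict(commands), dict(completions)

if __name__ == "__main__":
    sample = ("nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE "
              "sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 "
              "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: "
              "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
    print(summarize(sample))

Run against the full console output, a helper like this would simply report how many READ and WRITE commands on sqid:1 were aborted in each burst below.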
00:23:59.690 [2024-07-25 17:03:19.917092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.690 [2024-07-25 17:03:19.917101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated *NOTICE* pairs: WRITE sqid:1 cid:48 through cid:63 (lba 22528 through 24448, len:128 each) and READ sqid:1 cid:0 through cid:46 (lba 16384 through 22272, len:128 each), every command completed as ABORTED - SQ DELETION (00/08) qid:1, timestamps 17:03:19.917111 through 17:03:19.925960, wall clock 00:23:59.690 to 00:23:59.692]
00:23:59.692 [2024-07-25 17:03:19.926028] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x108c930 was disconnected and freed. reset controller.
[repeated *NOTICE* pairs: WRITE sqid:1 cid:23 through cid:63 (lba 11136 through 16256, len:128 each) and READ sqid:1 cid:0 through cid:22 (lba 8192 through 11008, len:128 each), every command completed as ABORTED - SQ DELETION (00/08) qid:1, timestamps 17:03:19.926126 through 17:03:19.927192, wall clock 00:23:59.692 to 00:23:59.694]
00:23:59.694 [2024-07-25 17:03:19.927247] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf33af0 was disconnected and freed. reset controller.
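In the completion lines, the parenthesised pair is the NVMe status code type and status code printed in hex: (00/08) is status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", which SPDK renders as ABORTED - SQ DELETION. A small decoder sketch is given below; it is an illustration only, with a deliberately partial status table, and any code value not shown in this log is an assumption taken from the NVMe base specification rather than from this run.

# Illustration only, not SPDK code: decode the "(sct/sc)" pair that
# spdk_nvme_print_completion prints, e.g. "(00/08)" in the lines above.
GENERIC_STATUS = {
    # Status code type 0x0 (generic command status) -- partial table;
    # 0x08 is the only value observed throughout this log.
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair):
    """Decode an 'sct/sc' hex pair such as '00/08'."""
    sct, sc = (int(field, 16) for field in pair.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "sct 0x%x, sc 0x%02x" % (sct, sc)

print(decode_status("00/08"))  # prints: ABORTED - SQ DELETION

Every completion in the bursts above decodes to this same value, which is consistent with the submission queues being deleted as each qpair is disconnected and freed before the bdev layer resets the controller.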
00:23:59.694 [2024-07-25 17:03:19.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.694 [2024-07-25 17:03:19.945705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated *NOTICE* pairs: WRITE sqid:1 cid:8 through cid:46 (lba 9216 through 14080, len:128 each), every command completed as ABORTED - SQ DELETION (00/08) qid:1, timestamps 17:03:19.945721 through 17:03:19.946377; the wall clock advances from 00:23:59.694 to 00:23:59.981 partway through the run]
00:23:59.981 [2024-07-25 
17:03:19.946386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 
17:03:19.946553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.946774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.946839] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1867e00 was disconnected and freed. reset controller. 00:23:59.981 [2024-07-25 17:03:19.948184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1100480 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f57c0 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10de2c0 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ddea0 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d05e0 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf47e30 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a780 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106250 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5ecf0 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.948342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3a5d0 (9): Bad file descriptor 00:23:59.981 [2024-07-25 17:03:19.951992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.952015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.952029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.952038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.952049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.952058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.952069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.952077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.952088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.952096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.952107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.952115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.981 [2024-07-25 17:03:19.952126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.981 [2024-07-25 17:03:19.952135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.952986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.952994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.953003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.953010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.953020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.953027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.953036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.953045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.953054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.953062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.953071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.953078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.953087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.982 [2024-07-25 17:03:19.953094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.982 [2024-07-25 17:03:19.953154] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf34fa0 was disconnected and freed. reset controller. 00:23:59.982 [2024-07-25 17:03:19.954491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:59.982 [2024-07-25 17:03:19.956204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:59.982 [2024-07-25 17:03:19.956230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:59.982 [2024-07-25 17:03:19.956240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:59.982 [2024-07-25 17:03:19.956821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.983 [2024-07-25 17:03:19.956861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d05e0 with addr=10.0.0.2, port=4420 00:23:59.983 [2024-07-25 17:03:19.956874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d05e0 is same with the state(5) to be set 00:23:59.983 [2024-07-25 17:03:19.956928] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.983 [2024-07-25 17:03:19.957795] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.983 [2024-07-25 17:03:19.958096] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.983 [2024-07-25 17:03:19.958139] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.983 [2024-07-25 17:03:19.958164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:59.983 [2024-07-25 17:03:19.958178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:59.983 [2024-07-25 17:03:19.958803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.983 [2024-07-25 17:03:19.958842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1106250 with addr=10.0.0.2, port=4420 00:23:59.983 [2024-07-25 17:03:19.958853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106250 is same with the state(5) to be set 00:23:59.983 [2024-07-25 17:03:19.959060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.983 [2024-07-25 17:03:19.959071] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf47e30 with addr=10.0.0.2, port=4420 00:23:59.983 [2024-07-25 17:03:19.959079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47e30 is same with the state(5) to be set 00:23:59.983 [2024-07-25 17:03:19.959402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.983 [2024-07-25 17:03:19.959439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6a780 with addr=10.0.0.2, port=4420 00:23:59.983 [2024-07-25 17:03:19.959453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6a780 is same with the state(5) to be set 00:23:59.983 [2024-07-25 17:03:19.959476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d05e0 (9): Bad file descriptor 00:23:59.983 [2024-07-25 17:03:19.959548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:59.983 [2024-07-25 17:03:19.959690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 
17:03:19.959864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.959982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.959993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.983 [2024-07-25 17:03:19.960400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.983 [2024-07-25 17:03:19.960407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.960639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.960647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cb60 is same with the state(5) to be set 00:23:59.984 [2024-07-25 17:03:19.960707] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x100cb60 was disconnected and freed. reset controller. 00:23:59.984 [2024-07-25 17:03:19.960715] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
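[Editor's note] The run of NOTICE lines above is the expected fallout of the target dropping the connection: every READ still outstanding on the I/O qpair is completed with "ABORTED - SQ DELETION (00/08)" as the submission queue is torn down, after which bdev_nvme frees qpair 0x100cb60 and tries to reset the controller; the failover message only means a recovery attempt is already in flight. As a minimal sketch (plain C, not SPDK code) of how that "(00/08)" status decodes, assuming a hypothetical raw completion-queue-entry dword 3 value and the field layout from the NVMe base spec:

/* Decode the status printed as "(00/08)" in the NOTICE lines above.
 * CQE dword 3 layout (NVMe base spec): bit 16 = phase tag (p), bits 24:17 = SC,
 * bits 27:25 = SCT, bit 30 = M, bit 31 = DNR. SCT 0x0 / SC 0x08 is the generic
 * "Command Aborted due to SQ Deletion" status, which SPDK prints as
 * "ABORTED - SQ DELETION". */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical raw CQE dword 3: SCT = 0x0, SC = 0x08, p/m/dnr = 0. */
    uint32_t cqe_dw3 = 0x08u << 17;

    unsigned p   = (cqe_dw3 >> 16) & 0x1;
    unsigned sc  = (cqe_dw3 >> 17) & 0xff;
    unsigned sct = (cqe_dw3 >> 25) & 0x7;
    unsigned m   = (cqe_dw3 >> 30) & 0x1;
    unsigned dnr = (cqe_dw3 >> 31) & 0x1;

    printf("sct:%u sc:0x%02x p:%u m:%u dnr:%u -> %s\n",
           sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other status");
    return 0;
}

The cid/lba values differ from line to line because each aborted READ on the qpair gets its own completion; the status itself is identical for all of them.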
00:23:59.984 [2024-07-25 17:03:19.961579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.984 [2024-07-25 17:03:19.961596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ddea0 with addr=10.0.0.2, port=4420 00:23:59.984 [2024-07-25 17:03:19.961605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ddea0 is same with the state(5) to be set 00:23:59.984 [2024-07-25 17:03:19.962097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.984 [2024-07-25 17:03:19.962106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf5ecf0 with addr=10.0.0.2, port=4420 00:23:59.984 [2024-07-25 17:03:19.962114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5ecf0 is same with the state(5) to be set 00:23:59.984 [2024-07-25 17:03:19.962124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106250 (9): Bad file descriptor 00:23:59.984 [2024-07-25 17:03:19.962134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf47e30 (9): Bad file descriptor 00:23:59.984 [2024-07-25 17:03:19.962143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a780 (9): Bad file descriptor 00:23:59.984 [2024-07-25 17:03:19.962152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:59.984 [2024-07-25 17:03:19.962159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:59.984 [2024-07-25 17:03:19.962168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
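[Editor's note] This block is the reconnect path failing: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED on Linux), which typically means nothing is listening on that address/port; the stale qpairs can no longer be flushed (bad file descriptor); and once spdk_nvme_ctrlr_reconnect_poll_async gives up, controller nqn.2016-06.io.spdk:cnode10 is left in a failed state. A minimal standalone sketch (plain POSIX sockets, not SPDK's posix_sock_create) that reproduces the errno = 111 condition, assuming no listener on the target address:

/* Reproduce the "connect() failed, errno = 111" condition reported above.
 * With no NVMe/TCP listener on 10.0.0.2:4420 the TCP connect is refused,
 * which surfaces as ECONNREFUSED (111 on Linux). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected here: errno = 111 (ECONNREFUSED), matching the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Here the refusals presumably persist for as long as the target side stays down, so the repeated connect() errors accompany the controller-reset attempts rather than indicating a crash on the initiator side.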
00:23:59.984 [2024-07-25 17:03:19.963464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 
17:03:19.963659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963828] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.963988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.963997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.984 [2024-07-25 17:03:19.964195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.984 [2024-07-25 17:03:19.964209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.964556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.964564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf36450 is same with the state(5) to be set 00:23:59.985 [2024-07-25 17:03:19.965845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966169] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:59.985 [2024-07-25 17:03:19.966686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.985 [2024-07-25 17:03:19.966826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.985 [2024-07-25 17:03:19.966835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 
[2024-07-25 17:03:19.966852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.966985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.966992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 
17:03:19.967017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.967140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.967148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0f870 is same with the state(5) to be set 00:23:59.986 [2024-07-25 17:03:19.968419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.968985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.968994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.969001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.969011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.969017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.969027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.969034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.969044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.969051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.969060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.969067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.969076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.986 [2024-07-25 17:03:19.969086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.986 [2024-07-25 17:03:19.969096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:59.987 [2024-07-25 17:03:19.969146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 
[2024-07-25 17:03:19.969316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 
17:03:19.969476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.987 [2024-07-25 17:03:19.969503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.987 [2024-07-25 17:03:19.969511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004b90 is same with the state(5) to be set 00:23:59.987 [2024-07-25 17:03:19.971582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.987 [2024-07-25 17:03:19.971603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.987 [2024-07-25 17:03:19.971614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:59.987 [2024-07-25 17:03:19.971624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:59.987 task offset: 8576 on job bdev=Nvme10n1 fails
00:23:59.987
00:23:59.987 Latency(us)
00:23:59.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:59.987 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme1n1 ended in about 0.67 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme1n1 : 0.67 189.93 11.87 94.97 0.00 221141.05 13981.01 200977.07
00:23:59.987 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme2n1 ended in about 0.66 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme2n1 : 0.66 193.89 12.12 96.95 0.00 210149.26 36700.16 251658.24
00:23:59.987 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme3n1 ended in about 0.66 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme3n1 : 0.66 193.54 12.10 96.77 0.00 204041.39 24139.09 198355.63
00:23:59.987 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme4n1 ended in about 0.66 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme4n1 : 0.66 96.60 6.04 96.60 0.00 297131.52 38447.79 332049.07
00:23:59.987 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme5n1 ended in about 0.67 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme5n1 : 0.67 189.10 11.82 96.05 0.00 194816.97 9338.88 279620.27
00:23:59.987 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme6n1 ended in about 0.68 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme6n1 : 0.68 189.27 11.83 94.63 0.00 189668.12 17148.59 209715.20
00:23:59.987 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme7n1 ended in about 0.66 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme7n1 : 0.66 96.25 6.02 96.25 0.00 269323.09 40413.87 353020.59
00:23:59.987 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme8n1 ended in about 0.68 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme8n1 : 0.68 94.27 5.89 94.27 0.00 266653.01 44346.03 246415.36
00:23:59.987 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme9n1 ended in about 0.68 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme9n1 : 0.68 93.95 5.87 93.95 0.00 258312.53 24685.23 248162.99
00:23:59.987 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.987 Job: Nvme10n1 ended in about 0.66 seconds with error
00:23:59.987 Verification LBA range: start 0x0 length 0x400
00:23:59.987 Nvme10n1 : 0.66 97.17 6.07 97.17 0.00 237605.55 47404.37 349525.33
00:23:59.987 ===================================================================================================================
00:23:59.987 Total : 1433.98 89.62 957.61 0.00 228742.48 9338.88 353020.59
00:23:59.987 [2024-07-25 17:03:19.998381] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:59.987 [2024-07-25 17:03:19.998452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ddea0 (9): Bad file descriptor 00:23:59.987 [2024-07-25 17:03:19.998466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5ecf0 (9): Bad file descriptor 00:23:59.987 [2024-07-25 17:03:19.998476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:59.987 [2024-07-25 17:03:19.998482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:59.987 [2024-07-25 17:03:19.998491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:59.987 [2024-07-25 17:03:19.998505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:59.987 [2024-07-25 17:03:19.998512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:59.987 [2024-07-25 17:03:19.998518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:59.987 [2024-07-25 17:03:19.998530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:59.987 [2024-07-25 17:03:19.998538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:59.987 [2024-07-25 17:03:19.998545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:59.987 [2024-07-25 17:03:19.998581] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:59.987 [2024-07-25 17:03:19.998592] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:59.987 [2024-07-25 17:03:19.998603] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:59.987 [2024-07-25 17:03:19.998616] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:59.987 [2024-07-25 17:03:19.998628] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:59.987 [2024-07-25 17:03:19.998639] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:59.987 [2024-07-25 17:03:19.998718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:59.987 [2024-07-25 17:03:19.998740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.987 [2024-07-25 17:03:19.998747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.987 [2024-07-25 17:03:19.998754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.987 [2024-07-25 17:03:19.999339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.987 [2024-07-25 17:03:19.999355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf3a5d0 with addr=10.0.0.2, port=4420 00:23:59.987 [2024-07-25 17:03:19.999364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a5d0 is same with the state(5) to be set 00:23:59.987 [2024-07-25 17:03:19.999692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.987 [2024-07-25 17:03:19.999702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10de2c0 with addr=10.0.0.2, port=4420 00:23:59.987 [2024-07-25 17:03:19.999709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10de2c0 is same with the state(5) to be set 00:23:59.987 [2024-07-25 17:03:20.000208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.988 [2024-07-25 17:03:20.000218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1100480 with addr=10.0.0.2, port=4420 00:23:59.988 [2024-07-25 17:03:20.000230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1100480 is same with the state(5) to be set 00:23:59.988 [2024-07-25 17:03:20.000238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.000244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.000250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:59.988 [2024-07-25 17:03:20.000262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.000268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.000275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:59.988 [2024-07-25 17:03:20.000294] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:59.988 [2024-07-25 17:03:20.000323] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:59.988 [2024-07-25 17:03:20.000334] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:59.988 [2024-07-25 17:03:20.001883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:59.988 [2024-07-25 17:03:20.001913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.001920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.002398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.988 [2024-07-25 17:03:20.002411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f57c0 with addr=10.0.0.2, port=4420 00:23:59.988 [2024-07-25 17:03:20.002418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f57c0 is same with the state(5) to be set 00:23:59.988 [2024-07-25 17:03:20.002428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3a5d0 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.002438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10de2c0 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.002447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1100480 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.002510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:59.988 [2024-07-25 17:03:20.002521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:59.988 [2024-07-25 17:03:20.002530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:59.988 [2024-07-25 17:03:20.003087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.988 [2024-07-25 17:03:20.003099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d05e0 with addr=10.0.0.2, port=4420 00:23:59.988 [2024-07-25 17:03:20.003105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d05e0 is same with the state(5) to be set 00:23:59.988 [2024-07-25 17:03:20.003114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f57c0 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.003122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.003128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.003135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.988 [2024-07-25 17:03:20.003146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.003156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.003163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:23:59.988 [2024-07-25 17:03:20.003173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.003179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.003186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:59.988 [2024-07-25 17:03:20.003246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.003254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.003260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.003678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.988 [2024-07-25 17:03:20.003689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6a780 with addr=10.0.0.2, port=4420 00:23:59.988 [2024-07-25 17:03:20.003696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6a780 is same with the state(5) to be set 00:23:59.988 [2024-07-25 17:03:20.004209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.988 [2024-07-25 17:03:20.004220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf47e30 with addr=10.0.0.2, port=4420 00:23:59.988 [2024-07-25 17:03:20.004227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47e30 is same with the state(5) to be set 00:23:59.988 [2024-07-25 17:03:20.004707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.988 [2024-07-25 17:03:20.004717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1106250 with addr=10.0.0.2, port=4420 00:23:59.988 [2024-07-25 17:03:20.004724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106250 is same with the state(5) to be set 00:23:59.988 [2024-07-25 17:03:20.004733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d05e0 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.004742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.004748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.004756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:59.988 [2024-07-25 17:03:20.004797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.988 [2024-07-25 17:03:20.004806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a780 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.004816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf47e30 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.004826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106250 (9): Bad file descriptor 00:23:59.988 [2024-07-25 17:03:20.004834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.004842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.004850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:59.988 [2024-07-25 17:03:20.004877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.004886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.004897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.004905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:59.988 [2024-07-25 17:03:20.004916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.004924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.004932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:59.988 [2024-07-25 17:03:20.004943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:59.988 [2024-07-25 17:03:20.004949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:59.988 [2024-07-25 17:03:20.004957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:59.988 [2024-07-25 17:03:20.004985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.004992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.988 [2024-07-25 17:03:20.004997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.988 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:59.988 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1506074 00:24:00.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1506074) - No such process 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.932 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.932 rmmod nvme_tcp 00:24:01.194 rmmod nvme_fabrics 00:24:01.194 rmmod nvme_keyring 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.194 17:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.194 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.112 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.112 00:24:03.112 real 0m7.397s 00:24:03.112 user 0m16.926s 00:24:03.112 sys 0m1.225s 00:24:03.112 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.112 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:03.112 ************************************ 00:24:03.112 END TEST nvmf_shutdown_tc3 00:24:03.112 ************************************ 00:24:03.112 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:03.112 00:24:03.112 real 0m32.093s 00:24:03.112 user 1m13.700s 00:24:03.112 sys 0m9.348s 00:24:03.112 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.112 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:03.112 ************************************ 00:24:03.112 END TEST nvmf_shutdown 00:24:03.112 ************************************ 00:24:03.374 17:03:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:24:03.374 00:24:03.374 real 11m28.722s 00:24:03.374 user 24m30.954s 00:24:03.374 sys 3m24.036s 00:24:03.374 17:03:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.374 17:03:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:03.374 ************************************ 00:24:03.374 END TEST nvmf_target_extra 00:24:03.374 ************************************ 00:24:03.374 17:03:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:03.374 17:03:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:03.374 17:03:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:03.374 17:03:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:03.374 ************************************ 00:24:03.374 START TEST nvmf_host 00:24:03.374 ************************************ 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:03.374 * Looking for test storage... 
00:24:03.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:03.374 17:03:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:03.375 17:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.638 ************************************ 00:24:03.638 START TEST nvmf_multicontroller 00:24:03.638 ************************************ 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:03.638 * Looking for test storage... 
00:24:03.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.638 17:03:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.638 17:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.786 17:03:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:11.786 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:11.786 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.786 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:11.787 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:11.787 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:24:11.787 00:24:11.787 --- 10.0.0.2 ping statistics --- 00:24:11.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.787 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:24:11.787 00:24:11.787 --- 10.0.0.1 ping statistics --- 00:24:11.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.787 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1510907 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1510907 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1510907 ']' 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.787 17:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.787 [2024-07-25 17:03:30.994286] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
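For reference, the nvmf_tcp_init/nvmfappstart sequence logged above condenses to the shell steps below. This is only a sketch assembled from the commands visible in this log (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones nvmf/common.sh detected on this runner), not a substitute for the script itself:

    # the target-side port moves into its own namespace; both ends get addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # reachability check in both directions, then the target starts inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The -m 0xE core mask and -e 0xFFFF trace mask come from the nvmfappstart -m 0xE call above; NVMF_APP is simply prefixed with the "ip netns exec $NVMF_TARGET_NAMESPACE" wrapper before launch.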
00:24:11.787 [2024-07-25 17:03:30.994355] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.787 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.787 [2024-07-25 17:03:31.083743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:11.787 [2024-07-25 17:03:31.175279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.787 [2024-07-25 17:03:31.175341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.787 [2024-07-25 17:03:31.175349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.787 [2024-07-25 17:03:31.175356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.787 [2024-07-25 17:03:31.175362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.787 [2024-07-25 17:03:31.175494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.787 [2024-07-25 17:03:31.175661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.787 [2024-07-25 17:03:31.175662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.787 [2024-07-25 17:03:31.820680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:11.787 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 Malloc0 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 
17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 [2024-07-25 17:03:31.883233] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 [2024-07-25 17:03:31.895164] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 Malloc1 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1511185 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1511185 /var/tmp/bdevperf.sock 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1511185 ']' 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
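The bdevperf process above is started idle (-z) with its own JSON-RPC socket, while the target it will connect to was provisioned over the default /var/tmp/spdk.sock. A condensed sketch of both sides, using only RPCs that appear in this log and assuming the rpc_cmd helper resolves to scripts/rpc.py as in SPDK's test wrappers (paths relative to the spdk checkout):

    # target side: TCP transport, a malloc-backed namespace per subsystem, listeners on 4420 and 4421
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is built the same way from Malloc1 with serial SPDK00000000000002

    # initiator side: bdevperf waits for configuration on its private socket,
    # then the attach that follows below registers the target subsystem as bdev NVMe0
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000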
00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.788 17:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.733 NVMe0n1 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.733 17:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:12.733 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.995 1 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.995 request: 00:24:12.995 { 00:24:12.995 "name": "NVMe0", 00:24:12.995 "trtype": "tcp", 00:24:12.995 "traddr": "10.0.0.2", 00:24:12.995 "adrfam": "ipv4", 00:24:12.995 
"trsvcid": "4420", 00:24:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.995 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:12.995 "hostaddr": "10.0.0.2", 00:24:12.995 "hostsvcid": "60000", 00:24:12.995 "prchk_reftag": false, 00:24:12.995 "prchk_guard": false, 00:24:12.995 "hdgst": false, 00:24:12.995 "ddgst": false, 00:24:12.995 "method": "bdev_nvme_attach_controller", 00:24:12.995 "req_id": 1 00:24:12.995 } 00:24:12.995 Got JSON-RPC error response 00:24:12.995 response: 00:24:12.995 { 00:24:12.995 "code": -114, 00:24:12.995 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:12.995 } 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:12.995 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.996 request: 00:24:12.996 { 00:24:12.996 "name": "NVMe0", 00:24:12.996 "trtype": "tcp", 00:24:12.996 "traddr": "10.0.0.2", 00:24:12.996 "adrfam": "ipv4", 00:24:12.996 "trsvcid": "4420", 00:24:12.996 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.996 "hostaddr": "10.0.0.2", 00:24:12.996 "hostsvcid": "60000", 00:24:12.996 "prchk_reftag": false, 00:24:12.996 "prchk_guard": false, 00:24:12.996 "hdgst": false, 00:24:12.996 "ddgst": false, 00:24:12.996 "method": "bdev_nvme_attach_controller", 00:24:12.996 "req_id": 1 00:24:12.996 } 00:24:12.996 Got JSON-RPC error response 00:24:12.996 response: 00:24:12.996 { 00:24:12.996 "code": -114, 00:24:12.996 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:24:12.996 } 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.996 request: 00:24:12.996 { 00:24:12.996 "name": "NVMe0", 00:24:12.996 "trtype": "tcp", 00:24:12.996 "traddr": "10.0.0.2", 00:24:12.996 "adrfam": "ipv4", 00:24:12.996 "trsvcid": "4420", 00:24:12.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.996 "hostaddr": "10.0.0.2", 00:24:12.996 "hostsvcid": "60000", 00:24:12.996 "prchk_reftag": false, 00:24:12.996 "prchk_guard": false, 00:24:12.996 "hdgst": false, 00:24:12.996 "ddgst": false, 00:24:12.996 "multipath": "disable", 00:24:12.996 "method": "bdev_nvme_attach_controller", 00:24:12.996 "req_id": 1 00:24:12.996 } 00:24:12.996 Got JSON-RPC error response 00:24:12.996 response: 00:24:12.996 { 00:24:12.996 "code": -114, 00:24:12.996 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:12.996 } 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.996 request: 00:24:12.996 { 00:24:12.996 "name": "NVMe0", 00:24:12.996 "trtype": "tcp", 00:24:12.996 "traddr": "10.0.0.2", 00:24:12.996 "adrfam": "ipv4", 00:24:12.996 "trsvcid": "4420", 00:24:12.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.996 "hostaddr": "10.0.0.2", 00:24:12.996 "hostsvcid": "60000", 00:24:12.996 "prchk_reftag": false, 00:24:12.996 "prchk_guard": false, 00:24:12.996 "hdgst": false, 00:24:12.996 "ddgst": false, 00:24:12.996 "multipath": "failover", 00:24:12.996 "method": "bdev_nvme_attach_controller", 00:24:12.996 "req_id": 1 00:24:12.996 } 00:24:12.996 Got JSON-RPC error response 00:24:12.996 response: 00:24:12.996 { 00:24:12.996 "code": -114, 00:24:12.996 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:12.996 } 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.996 00:24:12.996 17:03:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.996 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.257 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:13.257 17:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:14.213 0 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1511185 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1511185 ']' 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1511185 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.213 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1511185 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
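Each NOT-wrapped attach above is expected to fail with -114: once NVMe0 exists, re-attaching under the same name with a different host NQN, a different subsystem, -x disable, or -x failover against the already-known path is rejected by bdev_nvme_attach_controller. What the remainder of the test then exercises, sketched under the same rpc.py assumption as above:

    # accepted: a second path to cnode1 on listener 4421 under the existing name, then drop it again
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # accepted: an independent second controller NVMe1; the script then expects exactly 2 controllers
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe
    # with both controllers in place, the queued write workload is released
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests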
00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1511185' 00:24:14.475 killing process with pid 1511185 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1511185 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1511185 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:14.475 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:14.475 [2024-07-25 17:03:32.017924] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:24:14.475 [2024-07-25 17:03:32.017981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511185 ] 00:24:14.475 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.475 [2024-07-25 17:03:32.076821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.475 [2024-07-25 17:03:32.141660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.475 [2024-07-25 17:03:33.295470] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 4456f773-5e67-4553-90b8-ee63b96f4219 already exists 00:24:14.475 [2024-07-25 17:03:33.295500] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:4456f773-5e67-4553-90b8-ee63b96f4219 alias for bdev NVMe1n1 00:24:14.475 [2024-07-25 17:03:33.295508] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:14.475 Running I/O for 1 seconds... 00:24:14.475 00:24:14.475 Latency(us) 00:24:14.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.475 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:14.475 NVMe0n1 : 1.00 27278.44 106.56 0.00 0.00 4681.15 2211.84 13762.56 00:24:14.475 =================================================================================================================== 00:24:14.475 Total : 27278.44 106.56 0.00 0.00 4681.15 2211.84 13762.56 00:24:14.475 Received shutdown signal, test time was about 1.000000 seconds 00:24:14.475 00:24:14.475 Latency(us) 00:24:14.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.475 =================================================================================================================== 00:24:14.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.475 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.475 rmmod nvme_tcp 00:24:14.475 rmmod nvme_fabrics 00:24:14.475 rmmod nvme_keyring 00:24:14.475 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1510907 ']' 00:24:14.737 17:03:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1510907 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1510907 ']' 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1510907 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1510907 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1510907' 00:24:14.737 killing process with pid 1510907 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1510907 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1510907 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.737 17:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:17.285 00:24:17.285 real 0m13.364s 00:24:17.285 user 0m16.278s 00:24:17.285 sys 0m6.113s 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.285 ************************************ 00:24:17.285 END TEST nvmf_multicontroller 00:24:17.285 ************************************ 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.285 ************************************ 00:24:17.285 START TEST nvmf_aer 00:24:17.285 ************************************ 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:17.285 * Looking for test storage... 00:24:17.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.285 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:17.286 17:03:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:23.879 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:23.879 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:23.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.879 17:03:43 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:23.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.879 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.880 17:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:24:23.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:24:23.880 00:24:23.880 --- 10.0.0.2 ping statistics --- 00:24:23.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.880 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:24:23.880 00:24:23.880 --- 10.0.0.1 ping statistics --- 00:24:23.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.880 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.880 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1515857 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1515857 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1515857 ']' 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.141 17:03:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:24.141 [2024-07-25 17:03:44.228384] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
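For reference, the nvmf_tcp_init bring-up traced above reduces to roughly the following namespace and addressing steps. This is only a condensed sketch of the commands already visible in the trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this host enumerated, not fixed values of the test suite.

    # Sketch of the TCP test-network bring-up performed by nvmf_tcp_init above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # Target port goes into its own namespace; the initiator port stays on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in from the initiator interface, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1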
00:24:24.141 [2024-07-25 17:03:44.228434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.141 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.141 [2024-07-25 17:03:44.293391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.141 [2024-07-25 17:03:44.359249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.141 [2024-07-25 17:03:44.359287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.141 [2024-07-25 17:03:44.359294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.141 [2024-07-25 17:03:44.359301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.141 [2024-07-25 17:03:44.359306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.141 [2024-07-25 17:03:44.359485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.141 [2024-07-25 17:03:44.359601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.141 [2024-07-25 17:03:44.359757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.141 [2024-07-25 17:03:44.359759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 [2024-07-25 17:03:45.050235] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 Malloc0 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 17:03:45 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 [2024-07-25 17:03:45.106886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.083 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 [ 00:24:25.084 { 00:24:25.084 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:25.084 "subtype": "Discovery", 00:24:25.084 "listen_addresses": [], 00:24:25.084 "allow_any_host": true, 00:24:25.084 "hosts": [] 00:24:25.084 }, 00:24:25.084 { 00:24:25.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.084 "subtype": "NVMe", 00:24:25.084 "listen_addresses": [ 00:24:25.084 { 00:24:25.084 "trtype": "TCP", 00:24:25.084 "adrfam": "IPv4", 00:24:25.084 "traddr": "10.0.0.2", 00:24:25.084 "trsvcid": "4420" 00:24:25.084 } 00:24:25.084 ], 00:24:25.084 "allow_any_host": true, 00:24:25.084 "hosts": [], 00:24:25.084 "serial_number": "SPDK00000000000001", 00:24:25.084 "model_number": "SPDK bdev Controller", 00:24:25.084 "max_namespaces": 2, 00:24:25.084 "min_cntlid": 1, 00:24:25.084 "max_cntlid": 65519, 00:24:25.084 "namespaces": [ 00:24:25.084 { 00:24:25.084 "nsid": 1, 00:24:25.084 "bdev_name": "Malloc0", 00:24:25.084 "name": "Malloc0", 00:24:25.084 "nguid": "11E55DF3AE8944009134E783E3694C2E", 00:24:25.084 "uuid": "11e55df3-ae89-4400-9134-e783e3694c2e" 00:24:25.084 } 00:24:25.084 ] 00:24:25.084 } 00:24:25.084 ] 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1516114 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:25.084 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.084 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 Malloc1 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 Asynchronous Event Request test 00:24:25.345 Attaching to 10.0.0.2 00:24:25.345 Attached to 10.0.0.2 00:24:25.345 Registering asynchronous event callbacks... 00:24:25.345 Starting namespace attribute notice tests for all controllers... 00:24:25.345 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:25.345 aer_cb - Changed Namespace 00:24:25.345 Cleaning up... 
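Pulling the host/aer.sh target-side calls together, the sequence traced above amounts to roughly the following. Treat it as a sketch that assumes scripts/rpc.py is pointed at the same /var/tmp/spdk.sock the application registered (the rpc_cmd helper in these scripts dispatches to it), with paths taken from this workspace.

    # Target-side setup for the AER test, as traced above (sketch).
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The aer tool connects, arms its AER callback, then touches the file it was given.
    rm -f /tmp/aer_touch_file
    "$spdk/test/nvme/aer/aer" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    # Attaching a second namespace is what triggers the "Changed Namespace" AEN seen above.
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    $rpc nvmf_get_subsystems
    wait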
00:24:25.345 [ 00:24:25.345 { 00:24:25.345 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:25.345 "subtype": "Discovery", 00:24:25.345 "listen_addresses": [], 00:24:25.345 "allow_any_host": true, 00:24:25.345 "hosts": [] 00:24:25.345 }, 00:24:25.345 { 00:24:25.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.345 "subtype": "NVMe", 00:24:25.345 "listen_addresses": [ 00:24:25.345 { 00:24:25.345 "trtype": "TCP", 00:24:25.345 "adrfam": "IPv4", 00:24:25.345 "traddr": "10.0.0.2", 00:24:25.345 "trsvcid": "4420" 00:24:25.345 } 00:24:25.345 ], 00:24:25.345 "allow_any_host": true, 00:24:25.345 "hosts": [], 00:24:25.345 "serial_number": "SPDK00000000000001", 00:24:25.345 "model_number": "SPDK bdev Controller", 00:24:25.345 "max_namespaces": 2, 00:24:25.345 "min_cntlid": 1, 00:24:25.345 "max_cntlid": 65519, 00:24:25.345 "namespaces": [ 00:24:25.345 { 00:24:25.345 "nsid": 1, 00:24:25.345 "bdev_name": "Malloc0", 00:24:25.345 "name": "Malloc0", 00:24:25.345 "nguid": "11E55DF3AE8944009134E783E3694C2E", 00:24:25.345 "uuid": "11e55df3-ae89-4400-9134-e783e3694c2e" 00:24:25.345 }, 00:24:25.345 { 00:24:25.345 "nsid": 2, 00:24:25.345 "bdev_name": "Malloc1", 00:24:25.345 "name": "Malloc1", 00:24:25.345 "nguid": "8F032E2516E844F6B7FC211502C731D5", 00:24:25.345 "uuid": "8f032e25-16e8-44f6-b7fc-211502c731d5" 00:24:25.345 } 00:24:25.345 ] 00:24:25.345 } 00:24:25.345 ] 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1516114 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.345 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.346 rmmod 
nvme_tcp 00:24:25.346 rmmod nvme_fabrics 00:24:25.346 rmmod nvme_keyring 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1515857 ']' 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1515857 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1515857 ']' 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1515857 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1515857 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1515857' 00:24:25.346 killing process with pid 1515857 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1515857 00:24:25.346 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1515857 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.607 17:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.578 17:03:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:27.578 00:24:27.578 real 0m10.692s 00:24:27.578 user 0m7.485s 00:24:27.578 sys 0m5.512s 00:24:27.578 17:03:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:27.578 17:03:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.578 ************************************ 00:24:27.578 END TEST nvmf_aer 00:24:27.578 ************************************ 00:24:27.578 17:03:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:27.578 17:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:27.578 17:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:27.578 17:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.840 
************************************ 00:24:27.840 START TEST nvmf_async_init 00:24:27.840 ************************************ 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:27.840 * Looking for test storage... 00:24:27.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:27.840 17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.840 
17:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5200f59eda5f401e9d049fbc95e1d297 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:27.840 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:27.841 17:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
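Two identifiers that recur later in this trace were derived a few lines above: nvmf/common.sh obtains the host NQN from nvme gen-hostnqn, and async_init.sh builds the namespace NGUID by stripping the hyphens from a random UUID. A small illustrative re-run of that derivation (the values shown are the ones from this run, not constants):

    # Illustrative re-run of the identifier derivation seen above.
    hostnqn=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:00d0226a-... in this run
    nguid=$(uuidgen | tr -d -)    # 5200f59eda5f401e9d049fbc95e1d297 in this run
    # The same NGUID reappears re-hyphenated as the uuid/alias of nvme0n1 in the
    # bdev_get_bdevs output near the end of this test.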
00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.994 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:35.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:35.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.995 
17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:35.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:35.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.995 17:03:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.995 17:03:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:24:35.995 00:24:35.995 --- 10.0.0.2 ping statistics --- 00:24:35.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.995 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:35.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:24:35.995 00:24:35.995 --- 10.0.0.1 ping statistics --- 00:24:35.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.995 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:35.995 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1520215 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1520215 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1520215 ']' 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.996 17:03:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 [2024-07-25 17:03:55.365012] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
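The nvmfappstart -m 0x1 step above launches the target inside the test namespace and then blocks until its RPC socket answers. A minimal stand-in for what that amounts to in this run is sketched below; the retry loop is illustrative, not the waitforlisten helper's actual code.

    # Launch nvmf_tgt in the target namespace with a single core (-m 0x1), as above.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Wait until the app answers on its default RPC socket before issuing any rpc_cmd.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done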
00:24:35.996 [2024-07-25 17:03:55.365081] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.996 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.996 [2024-07-25 17:03:55.437661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.996 [2024-07-25 17:03:55.512523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.996 [2024-07-25 17:03:55.512564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.996 [2024-07-25 17:03:55.512572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.996 [2024-07-25 17:03:55.512578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.996 [2024-07-25 17:03:55.512584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.996 [2024-07-25 17:03:55.512610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 [2024-07-25 17:03:56.171705] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 null0 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:35.996 17:03:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5200f59eda5f401e9d049fbc95e1d297 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.996 [2024-07-25 17:03:56.231977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.996 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.258 nvme0n1 00:24:36.258 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.258 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:36.258 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.258 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.258 [ 00:24:36.258 { 00:24:36.258 "name": "nvme0n1", 00:24:36.258 "aliases": [ 00:24:36.258 "5200f59e-da5f-401e-9d04-9fbc95e1d297" 00:24:36.258 ], 00:24:36.258 "product_name": "NVMe disk", 00:24:36.258 "block_size": 512, 00:24:36.258 "num_blocks": 2097152, 00:24:36.258 "uuid": "5200f59e-da5f-401e-9d04-9fbc95e1d297", 00:24:36.258 "assigned_rate_limits": { 00:24:36.258 "rw_ios_per_sec": 0, 00:24:36.258 "rw_mbytes_per_sec": 0, 00:24:36.258 "r_mbytes_per_sec": 0, 00:24:36.258 "w_mbytes_per_sec": 0 00:24:36.258 }, 00:24:36.258 "claimed": false, 00:24:36.258 "zoned": false, 00:24:36.258 "supported_io_types": { 00:24:36.258 "read": true, 00:24:36.258 "write": true, 00:24:36.258 "unmap": false, 00:24:36.258 "flush": true, 00:24:36.258 "reset": true, 00:24:36.258 "nvme_admin": true, 00:24:36.258 "nvme_io": true, 00:24:36.258 "nvme_io_md": false, 00:24:36.258 "write_zeroes": true, 00:24:36.258 "zcopy": false, 00:24:36.258 "get_zone_info": false, 00:24:36.258 "zone_management": false, 00:24:36.258 "zone_append": false, 00:24:36.258 "compare": true, 00:24:36.258 "compare_and_write": true, 00:24:36.258 "abort": true, 00:24:36.258 "seek_hole": false, 00:24:36.258 "seek_data": false, 00:24:36.258 "copy": true, 00:24:36.258 "nvme_iov_md": 
false 00:24:36.258 }, 00:24:36.258 "memory_domains": [ 00:24:36.258 { 00:24:36.258 "dma_device_id": "system", 00:24:36.258 "dma_device_type": 1 00:24:36.258 } 00:24:36.258 ], 00:24:36.258 "driver_specific": { 00:24:36.258 "nvme": [ 00:24:36.258 { 00:24:36.258 "trid": { 00:24:36.259 "trtype": "TCP", 00:24:36.259 "adrfam": "IPv4", 00:24:36.259 "traddr": "10.0.0.2", 00:24:36.259 "trsvcid": "4420", 00:24:36.259 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:36.259 }, 00:24:36.259 "ctrlr_data": { 00:24:36.259 "cntlid": 1, 00:24:36.259 "vendor_id": "0x8086", 00:24:36.259 "model_number": "SPDK bdev Controller", 00:24:36.259 "serial_number": "00000000000000000000", 00:24:36.259 "firmware_revision": "24.09", 00:24:36.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:36.259 "oacs": { 00:24:36.259 "security": 0, 00:24:36.259 "format": 0, 00:24:36.259 "firmware": 0, 00:24:36.259 "ns_manage": 0 00:24:36.259 }, 00:24:36.259 "multi_ctrlr": true, 00:24:36.259 "ana_reporting": false 00:24:36.259 }, 00:24:36.259 "vs": { 00:24:36.259 "nvme_version": "1.3" 00:24:36.259 }, 00:24:36.259 "ns_data": { 00:24:36.259 "id": 1, 00:24:36.259 "can_share": true 00:24:36.259 } 00:24:36.259 } 00:24:36.259 ], 00:24:36.259 "mp_policy": "active_passive" 00:24:36.259 } 00:24:36.259 } 00:24:36.259 ] 00:24:36.259 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.259 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:36.259 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.259 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.259 [2024-07-25 17:03:56.505844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:36.259 [2024-07-25 17:03:56.505905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x724f40 (9): Bad file descriptor 00:24:36.521 [2024-07-25 17:03:56.638302] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
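Note: the reset path above disconnects the admin queue (the "Bad file descriptor" flush error is part of the teardown) and reconnects, and the second bdev_get_bdevs dump that follows shows cntlid moving from 1 to 2. A one-line check of that same field, assuming jq is installed and the command is run from the SPDK checkout:
  # controller ID should increment after each successful reset/reconnect
  scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'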
00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.521 [ 00:24:36.521 { 00:24:36.521 "name": "nvme0n1", 00:24:36.521 "aliases": [ 00:24:36.521 "5200f59e-da5f-401e-9d04-9fbc95e1d297" 00:24:36.521 ], 00:24:36.521 "product_name": "NVMe disk", 00:24:36.521 "block_size": 512, 00:24:36.521 "num_blocks": 2097152, 00:24:36.521 "uuid": "5200f59e-da5f-401e-9d04-9fbc95e1d297", 00:24:36.521 "assigned_rate_limits": { 00:24:36.521 "rw_ios_per_sec": 0, 00:24:36.521 "rw_mbytes_per_sec": 0, 00:24:36.521 "r_mbytes_per_sec": 0, 00:24:36.521 "w_mbytes_per_sec": 0 00:24:36.521 }, 00:24:36.521 "claimed": false, 00:24:36.521 "zoned": false, 00:24:36.521 "supported_io_types": { 00:24:36.521 "read": true, 00:24:36.521 "write": true, 00:24:36.521 "unmap": false, 00:24:36.521 "flush": true, 00:24:36.521 "reset": true, 00:24:36.521 "nvme_admin": true, 00:24:36.521 "nvme_io": true, 00:24:36.521 "nvme_io_md": false, 00:24:36.521 "write_zeroes": true, 00:24:36.521 "zcopy": false, 00:24:36.521 "get_zone_info": false, 00:24:36.521 "zone_management": false, 00:24:36.521 "zone_append": false, 00:24:36.521 "compare": true, 00:24:36.521 "compare_and_write": true, 00:24:36.521 "abort": true, 00:24:36.521 "seek_hole": false, 00:24:36.521 "seek_data": false, 00:24:36.521 "copy": true, 00:24:36.521 "nvme_iov_md": false 00:24:36.521 }, 00:24:36.521 "memory_domains": [ 00:24:36.521 { 00:24:36.521 "dma_device_id": "system", 00:24:36.521 "dma_device_type": 1 00:24:36.521 } 00:24:36.521 ], 00:24:36.521 "driver_specific": { 00:24:36.521 "nvme": [ 00:24:36.521 { 00:24:36.521 "trid": { 00:24:36.521 "trtype": "TCP", 00:24:36.521 "adrfam": "IPv4", 00:24:36.521 "traddr": "10.0.0.2", 00:24:36.521 "trsvcid": "4420", 00:24:36.521 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:36.521 }, 00:24:36.521 "ctrlr_data": { 00:24:36.521 "cntlid": 2, 00:24:36.521 "vendor_id": "0x8086", 00:24:36.521 "model_number": "SPDK bdev Controller", 00:24:36.521 "serial_number": "00000000000000000000", 00:24:36.521 "firmware_revision": "24.09", 00:24:36.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:36.521 "oacs": { 00:24:36.521 "security": 0, 00:24:36.521 "format": 0, 00:24:36.521 "firmware": 0, 00:24:36.521 "ns_manage": 0 00:24:36.521 }, 00:24:36.521 "multi_ctrlr": true, 00:24:36.521 "ana_reporting": false 00:24:36.521 }, 00:24:36.521 "vs": { 00:24:36.521 "nvme_version": "1.3" 00:24:36.521 }, 00:24:36.521 "ns_data": { 00:24:36.521 "id": 1, 00:24:36.521 "can_share": true 00:24:36.521 } 00:24:36.521 } 00:24:36.521 ], 00:24:36.521 "mp_policy": "active_passive" 00:24:36.521 } 00:24:36.521 } 00:24:36.521 ] 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.521 17:03:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DOsYEZt9e9 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DOsYEZt9e9 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.521 [2024-07-25 17:03:56.714494] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.521 [2024-07-25 17:03:56.714613] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.521 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.522 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DOsYEZt9e9 00:24:36.522 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.522 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.522 [2024-07-25 17:03:56.726519] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:36.522 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.522 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DOsYEZt9e9 00:24:36.522 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.522 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.522 [2024-07-25 17:03:56.738569] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.522 [2024-07-25 17:03:56.738608] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:36.784 nvme0n1 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
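Note: the steps above exercise the PSK-secured listener on port 4421. Condensed, the same sequence looks like the sketch below; the key value is the test's sample interchange key, and the PSK-file form used here is the one the build itself flags as deprecated in the warnings logged above, so treat this as a sketch of what this particular build exercises rather than the current recommended API:
  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk "$key_path"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"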
00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.784 [ 00:24:36.784 { 00:24:36.784 "name": "nvme0n1", 00:24:36.784 "aliases": [ 00:24:36.784 "5200f59e-da5f-401e-9d04-9fbc95e1d297" 00:24:36.784 ], 00:24:36.784 "product_name": "NVMe disk", 00:24:36.784 "block_size": 512, 00:24:36.784 "num_blocks": 2097152, 00:24:36.784 "uuid": "5200f59e-da5f-401e-9d04-9fbc95e1d297", 00:24:36.784 "assigned_rate_limits": { 00:24:36.784 "rw_ios_per_sec": 0, 00:24:36.784 "rw_mbytes_per_sec": 0, 00:24:36.784 "r_mbytes_per_sec": 0, 00:24:36.784 "w_mbytes_per_sec": 0 00:24:36.784 }, 00:24:36.784 "claimed": false, 00:24:36.784 "zoned": false, 00:24:36.784 "supported_io_types": { 00:24:36.784 "read": true, 00:24:36.784 "write": true, 00:24:36.784 "unmap": false, 00:24:36.784 "flush": true, 00:24:36.784 "reset": true, 00:24:36.784 "nvme_admin": true, 00:24:36.784 "nvme_io": true, 00:24:36.784 "nvme_io_md": false, 00:24:36.784 "write_zeroes": true, 00:24:36.784 "zcopy": false, 00:24:36.784 "get_zone_info": false, 00:24:36.784 "zone_management": false, 00:24:36.784 "zone_append": false, 00:24:36.784 "compare": true, 00:24:36.784 "compare_and_write": true, 00:24:36.784 "abort": true, 00:24:36.784 "seek_hole": false, 00:24:36.784 "seek_data": false, 00:24:36.784 "copy": true, 00:24:36.784 "nvme_iov_md": false 00:24:36.784 }, 00:24:36.784 "memory_domains": [ 00:24:36.784 { 00:24:36.784 "dma_device_id": "system", 00:24:36.784 "dma_device_type": 1 00:24:36.784 } 00:24:36.784 ], 00:24:36.784 "driver_specific": { 00:24:36.784 "nvme": [ 00:24:36.784 { 00:24:36.784 "trid": { 00:24:36.784 "trtype": "TCP", 00:24:36.784 "adrfam": "IPv4", 00:24:36.784 "traddr": "10.0.0.2", 00:24:36.784 "trsvcid": "4421", 00:24:36.784 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:36.784 }, 00:24:36.784 "ctrlr_data": { 00:24:36.784 "cntlid": 3, 00:24:36.784 "vendor_id": "0x8086", 00:24:36.784 "model_number": "SPDK bdev Controller", 00:24:36.784 "serial_number": "00000000000000000000", 00:24:36.784 "firmware_revision": "24.09", 00:24:36.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:36.784 "oacs": { 00:24:36.784 "security": 0, 00:24:36.784 "format": 0, 00:24:36.784 "firmware": 0, 00:24:36.784 "ns_manage": 0 00:24:36.784 }, 00:24:36.784 "multi_ctrlr": true, 00:24:36.784 "ana_reporting": false 00:24:36.784 }, 00:24:36.784 "vs": { 00:24:36.784 "nvme_version": "1.3" 00:24:36.784 }, 00:24:36.784 "ns_data": { 00:24:36.784 "id": 1, 00:24:36.784 "can_share": true 00:24:36.784 } 00:24:36.784 } 00:24:36.784 ], 00:24:36.784 "mp_policy": "active_passive" 00:24:36.784 } 00:24:36.784 } 00:24:36.784 ] 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.DOsYEZt9e9 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:36.784 17:03:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:36.784 rmmod nvme_tcp 00:24:36.784 rmmod nvme_fabrics 00:24:36.784 rmmod nvme_keyring 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1520215 ']' 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1520215 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1520215 ']' 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1520215 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520215 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520215' 00:24:36.784 killing process with pid 1520215 00:24:36.784 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1520215 00:24:36.784 [2024-07-25 17:03:56.980144] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:36.785 [2024-07-25 17:03:56.980171] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:36.785 17:03:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1520215 00:24:37.047 17:03:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:37.047 17:03:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:37.047 17:03:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:37.047 17:03:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.047 17:03:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:37.047 17:03:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.047 17:03:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.047 17:03:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.964 17:03:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:38.964 00:24:38.964 real 0m11.311s 00:24:38.964 user 0m3.989s 00:24:38.964 sys 0m5.785s 00:24:38.964 17:03:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:38.964 17:03:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.964 ************************************ 00:24:38.964 END TEST nvmf_async_init 00:24:38.964 ************************************ 00:24:38.964 17:03:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:38.964 17:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:38.964 17:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:38.964 17:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.227 ************************************ 00:24:39.227 START TEST dma 00:24:39.227 ************************************ 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:39.227 * Looking for test storage... 00:24:39.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.227 
17:03:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.227 17:03:59 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:39.227 00:24:39.227 real 0m0.138s 00:24:39.227 user 0m0.071s 00:24:39.227 sys 0m0.076s 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:39.227 ************************************ 00:24:39.227 END TEST dma 00:24:39.227 ************************************ 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.227 ************************************ 00:24:39.227 START TEST nvmf_identify 00:24:39.227 ************************************ 00:24:39.227 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:39.490 * Looking for test storage... 00:24:39.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.490 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.491 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.491 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.491 17:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.089 17:04:06 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:46.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.089 17:04:06 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:46.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:46.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:46.089 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.089 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:24:46.090 00:24:46.090 --- 10.0.0.2 ping statistics --- 00:24:46.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.090 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:24:46.090 00:24:46.090 --- 10.0.0.1 ping statistics --- 00:24:46.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.090 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.090 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1524674 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1524674 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1524674 ']' 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.351 17:04:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.351 [2024-07-25 17:04:06.416663] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
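Note: for the identify test, nvmf_tcp_init rebuilds the same two-port loopback topology seen above: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target address 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator address 10.0.0.1, and both directions are ping-tested before nvmf_tgt starts. A condensed by-hand sketch of that setup, using the interface names this node reports:
  # target side: move one port of the pair into its own namespace
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side: keep the second port in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity checks the trace runs before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp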
00:24:46.351 [2024-07-25 17:04:06.416729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.351 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.351 [2024-07-25 17:04:06.489133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.351 [2024-07-25 17:04:06.566222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.351 [2024-07-25 17:04:06.566265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.351 [2024-07-25 17:04:06.566273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.351 [2024-07-25 17:04:06.566279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.351 [2024-07-25 17:04:06.566285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.351 [2024-07-25 17:04:06.566365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.351 [2024-07-25 17:04:06.566619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.351 [2024-07-25 17:04:06.566775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.351 [2024-07-25 17:04:06.566774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.924 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:46.924 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 [2024-07-25 17:04:07.201985] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 Malloc0 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 [2024-07-25 17:04:07.301547] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.187 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.187 [ 00:24:47.187 { 00:24:47.187 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:47.187 "subtype": "Discovery", 00:24:47.187 "listen_addresses": [ 00:24:47.187 { 00:24:47.187 "trtype": "TCP", 00:24:47.187 "adrfam": "IPv4", 00:24:47.187 "traddr": "10.0.0.2", 00:24:47.187 "trsvcid": "4420" 00:24:47.187 } 00:24:47.187 ], 00:24:47.187 "allow_any_host": true, 00:24:47.187 "hosts": [] 00:24:47.187 }, 00:24:47.187 { 00:24:47.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.188 "subtype": "NVMe", 00:24:47.188 "listen_addresses": [ 00:24:47.188 { 00:24:47.188 "trtype": "TCP", 00:24:47.188 "adrfam": "IPv4", 00:24:47.188 "traddr": "10.0.0.2", 00:24:47.188 "trsvcid": "4420" 00:24:47.188 } 00:24:47.188 ], 00:24:47.188 "allow_any_host": true, 00:24:47.188 "hosts": [], 00:24:47.188 "serial_number": "SPDK00000000000001", 00:24:47.188 "model_number": "SPDK bdev Controller", 00:24:47.188 "max_namespaces": 32, 00:24:47.188 "min_cntlid": 1, 00:24:47.188 "max_cntlid": 65519, 00:24:47.188 "namespaces": [ 00:24:47.188 { 00:24:47.188 "nsid": 1, 00:24:47.188 "bdev_name": "Malloc0", 00:24:47.188 "name": "Malloc0", 00:24:47.188 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:47.188 "eui64": "ABCDEF0123456789", 00:24:47.188 "uuid": "6419dd7a-9558-4e51-91cf-3d4ea9e0ef3f" 00:24:47.188 } 00:24:47.188 ] 00:24:47.188 } 00:24:47.188 ] 00:24:47.188 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.188 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:47.188 [2024-07-25 17:04:07.362538] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:24:47.188 [2024-07-25 17:04:07.362584] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524946 ] 00:24:47.188 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.188 [2024-07-25 17:04:07.400875] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:47.188 [2024-07-25 17:04:07.400924] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:47.188 [2024-07-25 17:04:07.400930] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:47.188 [2024-07-25 17:04:07.400941] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:47.188 [2024-07-25 17:04:07.400949] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:47.188 [2024-07-25 17:04:07.401544] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:47.188 [2024-07-25 17:04:07.401572] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21d4ec0 0 00:24:47.188 [2024-07-25 17:04:07.415209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:47.188 [2024-07-25 17:04:07.415229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:47.188 [2024-07-25 17:04:07.415235] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:47.188 [2024-07-25 17:04:07.415239] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:47.188 [2024-07-25 17:04:07.415278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.415284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.415289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.188 [2024-07-25 17:04:07.415303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:47.188 [2024-07-25 17:04:07.415320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.188 [2024-07-25 17:04:07.423213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.188 [2024-07-25 17:04:07.423223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.188 [2024-07-25 17:04:07.423227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.188 [2024-07-25 17:04:07.423241] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:47.188 [2024-07-25 17:04:07.423248] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:47.188 [2024-07-25 17:04:07.423253] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:24:47.188 [2024-07-25 17:04:07.423266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.188 [2024-07-25 17:04:07.423285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.188 [2024-07-25 17:04:07.423298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.188 [2024-07-25 17:04:07.423573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.188 [2024-07-25 17:04:07.423582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.188 [2024-07-25 17:04:07.423585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.188 [2024-07-25 17:04:07.423598] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:47.188 [2024-07-25 17:04:07.423606] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:47.188 [2024-07-25 17:04:07.423613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.188 [2024-07-25 17:04:07.423628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.188 [2024-07-25 17:04:07.423640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.188 [2024-07-25 17:04:07.423882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.188 [2024-07-25 17:04:07.423889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.188 [2024-07-25 17:04:07.423892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.188 [2024-07-25 17:04:07.423902] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:47.188 [2024-07-25 17:04:07.423909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:47.188 [2024-07-25 17:04:07.423916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.423923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.188 [2024-07-25 17:04:07.423930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.188 [2024-07-25 17:04:07.423940] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.188 [2024-07-25 17:04:07.424188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.188 [2024-07-25 17:04:07.424195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.188 [2024-07-25 17:04:07.424198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.424208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.188 [2024-07-25 17:04:07.424214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:47.188 [2024-07-25 17:04:07.424223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.424227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.424230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.188 [2024-07-25 17:04:07.424237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.188 [2024-07-25 17:04:07.424252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.188 [2024-07-25 17:04:07.424549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.188 [2024-07-25 17:04:07.424555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.188 [2024-07-25 17:04:07.424559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.424562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.188 [2024-07-25 17:04:07.424567] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:47.188 [2024-07-25 17:04:07.424572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:47.188 [2024-07-25 17:04:07.424579] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:47.188 [2024-07-25 17:04:07.424685] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:47.188 [2024-07-25 17:04:07.424689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:47.188 [2024-07-25 17:04:07.424698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.424702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.424705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.188 [2024-07-25 17:04:07.424712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.188 [2024-07-25 17:04:07.424723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.188 [2024-07-25 17:04:07.424989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:24:47.188 [2024-07-25 17:04:07.424995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.188 [2024-07-25 17:04:07.424999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.425003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.188 [2024-07-25 17:04:07.425008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:47.188 [2024-07-25 17:04:07.425017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.188 [2024-07-25 17:04:07.425020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.425024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.425030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.189 [2024-07-25 17:04:07.425041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.189 [2024-07-25 17:04:07.425298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.189 [2024-07-25 17:04:07.425305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.189 [2024-07-25 17:04:07.425309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.425313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.189 [2024-07-25 17:04:07.425317] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:47.189 [2024-07-25 17:04:07.425322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:47.189 [2024-07-25 17:04:07.425329] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:47.189 [2024-07-25 17:04:07.425345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:47.189 [2024-07-25 17:04:07.425355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.425358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.425366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.189 [2024-07-25 17:04:07.425378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.189 [2024-07-25 17:04:07.425774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.189 [2024-07-25 17:04:07.425780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.189 [2024-07-25 17:04:07.425784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.425788] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21d4ec0): datao=0, datal=4096, cccid=0 00:24:47.189 [2024-07-25 17:04:07.425793] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2257e40) on tqpair(0x21d4ec0): expected_datao=0, payload_size=4096 00:24:47.189 [2024-07-25 17:04:07.425798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.425945] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.425950] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.189 [2024-07-25 17:04:07.426162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.189 [2024-07-25 17:04:07.426165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.189 [2024-07-25 17:04:07.426177] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:47.189 [2024-07-25 17:04:07.426182] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:47.189 [2024-07-25 17:04:07.426186] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:47.189 [2024-07-25 17:04:07.426191] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:47.189 [2024-07-25 17:04:07.426196] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:47.189 [2024-07-25 17:04:07.426207] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:47.189 [2024-07-25 17:04:07.426215] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:47.189 [2024-07-25 17:04:07.426226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.426241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.189 [2024-07-25 17:04:07.426254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.189 [2024-07-25 17:04:07.426497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.189 [2024-07-25 17:04:07.426504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.189 [2024-07-25 17:04:07.426507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.189 [2024-07-25 17:04:07.426523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.426536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.189 [2024-07-25 17:04:07.426542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.426555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.189 [2024-07-25 17:04:07.426561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.426574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.189 [2024-07-25 17:04:07.426580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.426593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.189 [2024-07-25 17:04:07.426597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:47.189 [2024-07-25 17:04:07.426608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:47.189 [2024-07-25 17:04:07.426615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.426625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.189 [2024-07-25 17:04:07.426638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257e40, cid 0, qid 0 00:24:47.189 [2024-07-25 17:04:07.426643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257fc0, cid 1, qid 0 00:24:47.189 [2024-07-25 17:04:07.426648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2258140, cid 2, qid 0 00:24:47.189 [2024-07-25 17:04:07.426652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22582c0, cid 3, qid 0 00:24:47.189 [2024-07-25 17:04:07.426657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2258440, cid 4, qid 0 00:24:47.189 [2024-07-25 17:04:07.426977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.189 [2024-07-25 17:04:07.426983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.189 [2024-07-25 17:04:07.426987] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.426991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2258440) on tqpair=0x21d4ec0 00:24:47.189 [2024-07-25 17:04:07.426996] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:47.189 [2024-07-25 17:04:07.427001] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:47.189 [2024-07-25 17:04:07.427012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.427018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.427025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.189 [2024-07-25 17:04:07.427036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2258440, cid 4, qid 0 00:24:47.189 [2024-07-25 17:04:07.431212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.189 [2024-07-25 17:04:07.431220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.189 [2024-07-25 17:04:07.431224] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.431227] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21d4ec0): datao=0, datal=4096, cccid=4 00:24:47.189 [2024-07-25 17:04:07.431232] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2258440) on tqpair(0x21d4ec0): expected_datao=0, payload_size=4096 00:24:47.189 [2024-07-25 17:04:07.431236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.431243] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.431246] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.431252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.189 [2024-07-25 17:04:07.431258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.189 [2024-07-25 17:04:07.431261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.431265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2258440) on tqpair=0x21d4ec0 00:24:47.189 [2024-07-25 17:04:07.431278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:47.189 [2024-07-25 17:04:07.431301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.189 [2024-07-25 17:04:07.431306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21d4ec0) 00:24:47.189 [2024-07-25 17:04:07.431312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.189 [2024-07-25 17:04:07.431320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.190 [2024-07-25 17:04:07.431324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.190 [2024-07-25 17:04:07.431328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21d4ec0) 00:24:47.190 [2024-07-25 
17:04:07.431334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.190 [2024-07-25 17:04:07.431349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2258440, cid 4, qid 0 00:24:47.190 [2024-07-25 17:04:07.431355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22585c0, cid 5, qid 0 00:24:47.190 [2024-07-25 17:04:07.431677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.190 [2024-07-25 17:04:07.431684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.190 [2024-07-25 17:04:07.431687] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.190 [2024-07-25 17:04:07.431691] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21d4ec0): datao=0, datal=1024, cccid=4 00:24:47.190 [2024-07-25 17:04:07.431695] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2258440) on tqpair(0x21d4ec0): expected_datao=0, payload_size=1024 00:24:47.190 [2024-07-25 17:04:07.431699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.190 [2024-07-25 17:04:07.431706] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.190 [2024-07-25 17:04:07.431709] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.190 [2024-07-25 17:04:07.431715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.190 [2024-07-25 17:04:07.431721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.190 [2024-07-25 17:04:07.431724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.190 [2024-07-25 17:04:07.431731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22585c0) on tqpair=0x21d4ec0 00:24:47.480 [2024-07-25 17:04:07.472485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.480 [2024-07-25 17:04:07.472497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.480 [2024-07-25 17:04:07.472501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.472505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2258440) on tqpair=0x21d4ec0 00:24:47.480 [2024-07-25 17:04:07.472525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.472530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21d4ec0) 00:24:47.480 [2024-07-25 17:04:07.472538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.480 [2024-07-25 17:04:07.472555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2258440, cid 4, qid 0 00:24:47.480 [2024-07-25 17:04:07.472811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.480 [2024-07-25 17:04:07.472818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.480 [2024-07-25 17:04:07.472822] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.472826] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21d4ec0): datao=0, datal=3072, cccid=4 00:24:47.480 [2024-07-25 17:04:07.472830] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2258440) on tqpair(0x21d4ec0): expected_datao=0, payload_size=3072 00:24:47.480 
[2024-07-25 17:04:07.472835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.472842] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.472846] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.473016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.480 [2024-07-25 17:04:07.473023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.480 [2024-07-25 17:04:07.473026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.473030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2258440) on tqpair=0x21d4ec0 00:24:47.480 [2024-07-25 17:04:07.473039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.473043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21d4ec0) 00:24:47.480 [2024-07-25 17:04:07.473049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.480 [2024-07-25 17:04:07.473064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2258440, cid 4, qid 0 00:24:47.480 [2024-07-25 17:04:07.473296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.480 [2024-07-25 17:04:07.473303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.480 [2024-07-25 17:04:07.473307] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.473310] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21d4ec0): datao=0, datal=8, cccid=4 00:24:47.480 [2024-07-25 17:04:07.473315] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2258440) on tqpair(0x21d4ec0): expected_datao=0, payload_size=8 00:24:47.480 [2024-07-25 17:04:07.473319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.473326] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.473329] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.514449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.480 [2024-07-25 17:04:07.514461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.480 [2024-07-25 17:04:07.514464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.480 [2024-07-25 17:04:07.514472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2258440) on tqpair=0x21d4ec0 00:24:47.480 ===================================================== 00:24:47.480 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:47.480 ===================================================== 00:24:47.480 Controller Capabilities/Features 00:24:47.480 ================================ 00:24:47.480 Vendor ID: 0000 00:24:47.480 Subsystem Vendor ID: 0000 00:24:47.480 Serial Number: .................... 00:24:47.480 Model Number: ........................................ 
00:24:47.480 Firmware Version: 24.09 00:24:47.480 Recommended Arb Burst: 0 00:24:47.480 IEEE OUI Identifier: 00 00 00 00:24:47.480 Multi-path I/O 00:24:47.480 May have multiple subsystem ports: No 00:24:47.480 May have multiple controllers: No 00:24:47.480 Associated with SR-IOV VF: No 00:24:47.480 Max Data Transfer Size: 131072 00:24:47.480 Max Number of Namespaces: 0 00:24:47.480 Max Number of I/O Queues: 1024 00:24:47.480 NVMe Specification Version (VS): 1.3 00:24:47.480 NVMe Specification Version (Identify): 1.3 00:24:47.480 Maximum Queue Entries: 128 00:24:47.480 Contiguous Queues Required: Yes 00:24:47.480 Arbitration Mechanisms Supported 00:24:47.480 Weighted Round Robin: Not Supported 00:24:47.480 Vendor Specific: Not Supported 00:24:47.480 Reset Timeout: 15000 ms 00:24:47.480 Doorbell Stride: 4 bytes 00:24:47.480 NVM Subsystem Reset: Not Supported 00:24:47.480 Command Sets Supported 00:24:47.480 NVM Command Set: Supported 00:24:47.480 Boot Partition: Not Supported 00:24:47.480 Memory Page Size Minimum: 4096 bytes 00:24:47.480 Memory Page Size Maximum: 4096 bytes 00:24:47.480 Persistent Memory Region: Not Supported 00:24:47.480 Optional Asynchronous Events Supported 00:24:47.480 Namespace Attribute Notices: Not Supported 00:24:47.480 Firmware Activation Notices: Not Supported 00:24:47.480 ANA Change Notices: Not Supported 00:24:47.480 PLE Aggregate Log Change Notices: Not Supported 00:24:47.480 LBA Status Info Alert Notices: Not Supported 00:24:47.480 EGE Aggregate Log Change Notices: Not Supported 00:24:47.480 Normal NVM Subsystem Shutdown event: Not Supported 00:24:47.480 Zone Descriptor Change Notices: Not Supported 00:24:47.480 Discovery Log Change Notices: Supported 00:24:47.480 Controller Attributes 00:24:47.480 128-bit Host Identifier: Not Supported 00:24:47.480 Non-Operational Permissive Mode: Not Supported 00:24:47.480 NVM Sets: Not Supported 00:24:47.480 Read Recovery Levels: Not Supported 00:24:47.480 Endurance Groups: Not Supported 00:24:47.480 Predictable Latency Mode: Not Supported 00:24:47.480 Traffic Based Keep ALive: Not Supported 00:24:47.480 Namespace Granularity: Not Supported 00:24:47.480 SQ Associations: Not Supported 00:24:47.480 UUID List: Not Supported 00:24:47.480 Multi-Domain Subsystem: Not Supported 00:24:47.480 Fixed Capacity Management: Not Supported 00:24:47.480 Variable Capacity Management: Not Supported 00:24:47.480 Delete Endurance Group: Not Supported 00:24:47.480 Delete NVM Set: Not Supported 00:24:47.480 Extended LBA Formats Supported: Not Supported 00:24:47.480 Flexible Data Placement Supported: Not Supported 00:24:47.480 00:24:47.480 Controller Memory Buffer Support 00:24:47.480 ================================ 00:24:47.480 Supported: No 00:24:47.480 00:24:47.480 Persistent Memory Region Support 00:24:47.480 ================================ 00:24:47.480 Supported: No 00:24:47.480 00:24:47.480 Admin Command Set Attributes 00:24:47.480 ============================ 00:24:47.480 Security Send/Receive: Not Supported 00:24:47.480 Format NVM: Not Supported 00:24:47.480 Firmware Activate/Download: Not Supported 00:24:47.480 Namespace Management: Not Supported 00:24:47.480 Device Self-Test: Not Supported 00:24:47.480 Directives: Not Supported 00:24:47.480 NVMe-MI: Not Supported 00:24:47.480 Virtualization Management: Not Supported 00:24:47.480 Doorbell Buffer Config: Not Supported 00:24:47.480 Get LBA Status Capability: Not Supported 00:24:47.480 Command & Feature Lockdown Capability: Not Supported 00:24:47.480 Abort Command Limit: 1 00:24:47.480 Async 
Event Request Limit: 4 00:24:47.480 Number of Firmware Slots: N/A 00:24:47.480 Firmware Slot 1 Read-Only: N/A 00:24:47.480 Firmware Activation Without Reset: N/A 00:24:47.480 Multiple Update Detection Support: N/A 00:24:47.480 Firmware Update Granularity: No Information Provided 00:24:47.480 Per-Namespace SMART Log: No 00:24:47.480 Asymmetric Namespace Access Log Page: Not Supported 00:24:47.480 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:47.480 Command Effects Log Page: Not Supported 00:24:47.480 Get Log Page Extended Data: Supported 00:24:47.480 Telemetry Log Pages: Not Supported 00:24:47.480 Persistent Event Log Pages: Not Supported 00:24:47.480 Supported Log Pages Log Page: May Support 00:24:47.480 Commands Supported & Effects Log Page: Not Supported 00:24:47.480 Feature Identifiers & Effects Log Page:May Support 00:24:47.480 NVMe-MI Commands & Effects Log Page: May Support 00:24:47.480 Data Area 4 for Telemetry Log: Not Supported 00:24:47.480 Error Log Page Entries Supported: 128 00:24:47.480 Keep Alive: Not Supported 00:24:47.480 00:24:47.480 NVM Command Set Attributes 00:24:47.480 ========================== 00:24:47.480 Submission Queue Entry Size 00:24:47.480 Max: 1 00:24:47.480 Min: 1 00:24:47.480 Completion Queue Entry Size 00:24:47.480 Max: 1 00:24:47.480 Min: 1 00:24:47.480 Number of Namespaces: 0 00:24:47.480 Compare Command: Not Supported 00:24:47.480 Write Uncorrectable Command: Not Supported 00:24:47.480 Dataset Management Command: Not Supported 00:24:47.480 Write Zeroes Command: Not Supported 00:24:47.480 Set Features Save Field: Not Supported 00:24:47.480 Reservations: Not Supported 00:24:47.480 Timestamp: Not Supported 00:24:47.480 Copy: Not Supported 00:24:47.480 Volatile Write Cache: Not Present 00:24:47.480 Atomic Write Unit (Normal): 1 00:24:47.480 Atomic Write Unit (PFail): 1 00:24:47.480 Atomic Compare & Write Unit: 1 00:24:47.480 Fused Compare & Write: Supported 00:24:47.480 Scatter-Gather List 00:24:47.480 SGL Command Set: Supported 00:24:47.480 SGL Keyed: Supported 00:24:47.480 SGL Bit Bucket Descriptor: Not Supported 00:24:47.480 SGL Metadata Pointer: Not Supported 00:24:47.480 Oversized SGL: Not Supported 00:24:47.480 SGL Metadata Address: Not Supported 00:24:47.480 SGL Offset: Supported 00:24:47.480 Transport SGL Data Block: Not Supported 00:24:47.480 Replay Protected Memory Block: Not Supported 00:24:47.480 00:24:47.480 Firmware Slot Information 00:24:47.480 ========================= 00:24:47.480 Active slot: 0 00:24:47.480 00:24:47.480 00:24:47.480 Error Log 00:24:47.480 ========= 00:24:47.480 00:24:47.480 Active Namespaces 00:24:47.480 ================= 00:24:47.480 Discovery Log Page 00:24:47.480 ================== 00:24:47.480 Generation Counter: 2 00:24:47.480 Number of Records: 2 00:24:47.480 Record Format: 0 00:24:47.480 00:24:47.480 Discovery Log Entry 0 00:24:47.480 ---------------------- 00:24:47.480 Transport Type: 3 (TCP) 00:24:47.480 Address Family: 1 (IPv4) 00:24:47.480 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:47.480 Entry Flags: 00:24:47.480 Duplicate Returned Information: 1 00:24:47.480 Explicit Persistent Connection Support for Discovery: 1 00:24:47.480 Transport Requirements: 00:24:47.480 Secure Channel: Not Required 00:24:47.480 Port ID: 0 (0x0000) 00:24:47.480 Controller ID: 65535 (0xffff) 00:24:47.480 Admin Max SQ Size: 128 00:24:47.480 Transport Service Identifier: 4420 00:24:47.480 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:47.480 Transport Address: 10.0.0.2 00:24:47.480 
Discovery Log Entry 1 00:24:47.480 ---------------------- 00:24:47.480 Transport Type: 3 (TCP) 00:24:47.480 Address Family: 1 (IPv4) 00:24:47.480 Subsystem Type: 2 (NVM Subsystem) 00:24:47.480 Entry Flags: 00:24:47.480 Duplicate Returned Information: 0 00:24:47.480 Explicit Persistent Connection Support for Discovery: 0 00:24:47.480 Transport Requirements: 00:24:47.480 Secure Channel: Not Required 00:24:47.481 Port ID: 0 (0x0000) 00:24:47.481 Controller ID: 65535 (0xffff) 00:24:47.481 Admin Max SQ Size: 128 00:24:47.481 Transport Service Identifier: 4420 00:24:47.481 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:47.481 Transport Address: 10.0.0.2 [2024-07-25 17:04:07.514556] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:47.481 [2024-07-25 17:04:07.514567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257e40) on tqpair=0x21d4ec0 00:24:47.481 [2024-07-25 17:04:07.514574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.481 [2024-07-25 17:04:07.514580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257fc0) on tqpair=0x21d4ec0 00:24:47.481 [2024-07-25 17:04:07.514584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.481 [2024-07-25 17:04:07.514589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2258140) on tqpair=0x21d4ec0 00:24:47.481 [2024-07-25 17:04:07.514594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.481 [2024-07-25 17:04:07.514598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22582c0) on tqpair=0x21d4ec0 00:24:47.481 [2024-07-25 17:04:07.514603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.481 [2024-07-25 17:04:07.514613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.514617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.514620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21d4ec0) 00:24:47.481 [2024-07-25 17:04:07.514628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.514642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22582c0, cid 3, qid 0 00:24:47.481 [2024-07-25 17:04:07.514963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.514970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.514974] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.514977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22582c0) on tqpair=0x21d4ec0 00:24:47.481 [2024-07-25 17:04:07.514985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.514988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.514992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21d4ec0) 00:24:47.481 [2024-07-25 
17:04:07.514999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.515012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22582c0, cid 3, qid 0 00:24:47.481 [2024-07-25 17:04:07.519212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.519220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.519224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.519228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22582c0) on tqpair=0x21d4ec0 00:24:47.481 [2024-07-25 17:04:07.519233] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:47.481 [2024-07-25 17:04:07.519237] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:47.481 [2024-07-25 17:04:07.519247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.519251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.519254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21d4ec0) 00:24:47.481 [2024-07-25 17:04:07.519261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.519277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22582c0, cid 3, qid 0 00:24:47.481 [2024-07-25 17:04:07.519493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.519500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.519504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.519507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22582c0) on tqpair=0x21d4ec0 00:24:47.481 [2024-07-25 17:04:07.519515] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:24:47.481 00:24:47.481 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:47.481 [2024-07-25 17:04:07.563399] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
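For reference, the flow exercised above can be reproduced outside the autotest harness. The sketch below is a minimal approximation, assuming rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, that a running nvmf_tgt already has the tcp transport, the Malloc0 bdev and the nqn.2016-06.io.spdk:cnode1 subsystem created by the earlier part of identify.sh (those steps fall outside this excerpt, so the prerequisite commands shown are assumptions inferred from the nvmf_get_subsystems JSON above), and that paths are relative to a local SPDK checkout. The listener, namespace and identify invocations mirror the commands traced in the log.

#!/usr/bin/env bash
# Minimal sketch (not the autotest script itself): rebuild the target state
# exercised by host/identify.sh and run the two spdk_nvme_identify passes.
set -e

SPDK_DIR=/path/to/spdk            # assumption: local SPDK checkout
RPC="$SPDK_DIR/scripts/rpc.py"    # rpc_cmd in the log is assumed to wrap this script
NQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.2
PORT=4420

# Prerequisites assumed to have been done earlier in the script (not in this excerpt):
#   $RPC nvmf_create_transport -t tcp
#   $RPC bdev_malloc_create 64 512 -b Malloc0
#   $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001

# Steps traced above: attach the namespace and expose the listeners.
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s "$PORT"
$RPC nvmf_subsystem_add_listener discovery -t tcp -a "$ADDR" -s "$PORT"
$RPC nvmf_get_subsystems

# Identify the discovery subsystem, then the NVM subsystem, as in the log above.
"$SPDK_DIR/build/bin/spdk_nvme_identify" \
    -r "trtype:tcp adrfam:IPv4 traddr:$ADDR trsvcid:$PORT subnqn:nqn.2014-08.org.nvmexpress.discovery" -L all
"$SPDK_DIR/build/bin/spdk_nvme_identify" \
    -r "trtype:tcp adrfam:IPv4 traddr:$ADDR trsvcid:$PORT subnqn:$NQN" -L all

Run this way, the nvmf_get_subsystems JSON and the two controller reports should match the output captured in this log, modulo the generated namespace UUID and the per-run EAL/debug timestamps.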
00:24:47.481 [2024-07-25 17:04:07.563476] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524961 ] 00:24:47.481 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.481 [2024-07-25 17:04:07.594751] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:47.481 [2024-07-25 17:04:07.594795] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:47.481 [2024-07-25 17:04:07.594800] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:47.481 [2024-07-25 17:04:07.594811] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:47.481 [2024-07-25 17:04:07.594819] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:47.481 [2024-07-25 17:04:07.598227] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:47.481 [2024-07-25 17:04:07.598250] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x227eec0 0 00:24:47.481 [2024-07-25 17:04:07.606211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:47.481 [2024-07-25 17:04:07.606228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:47.481 [2024-07-25 17:04:07.606233] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:47.481 [2024-07-25 17:04:07.606236] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:47.481 [2024-07-25 17:04:07.606271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.606277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.606281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.606292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:47.481 [2024-07-25 17:04:07.606309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.614213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.614222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.614225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.614241] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:47.481 [2024-07-25 17:04:07.614247] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:47.481 [2024-07-25 17:04:07.614256] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:47.481 [2024-07-25 17:04:07.614267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:47.481 [2024-07-25 17:04:07.614275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.614282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.614295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.614496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.614504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.614508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.614520] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:47.481 [2024-07-25 17:04:07.614528] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:47.481 [2024-07-25 17:04:07.614535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.614550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.614562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.614795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.614801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.614805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.614813] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:47.481 [2024-07-25 17:04:07.614821] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:47.481 [2024-07-25 17:04:07.614828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.614835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.614841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.614851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.615076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.615082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:47.481 [2024-07-25 17:04:07.615086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.615094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:47.481 [2024-07-25 17:04:07.615103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.615121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.615132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.615301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.615308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.615311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.615319] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:47.481 [2024-07-25 17:04:07.615324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:47.481 [2024-07-25 17:04:07.615331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:47.481 [2024-07-25 17:04:07.615437] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:47.481 [2024-07-25 17:04:07.615440] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:47.481 [2024-07-25 17:04:07.615448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.615462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.615474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.615706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.615713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.615716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on 
tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.615724] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:47.481 [2024-07-25 17:04:07.615734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.615741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.615748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.615758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.615987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.615993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.615997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.616000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.616005] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:47.481 [2024-07-25 17:04:07.616012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:47.481 [2024-07-25 17:04:07.616020] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:47.481 [2024-07-25 17:04:07.616028] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:47.481 [2024-07-25 17:04:07.616036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.616040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.481 [2024-07-25 17:04:07.616047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.481 [2024-07-25 17:04:07.616058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.481 [2024-07-25 17:04:07.616385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.481 [2024-07-25 17:04:07.616393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.481 [2024-07-25 17:04:07.616396] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.616400] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=4096, cccid=0 00:24:47.481 [2024-07-25 17:04:07.616405] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2301e40) on tqpair(0x227eec0): expected_datao=0, payload_size=4096 00:24:47.481 [2024-07-25 17:04:07.616409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.616417] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.616421] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.657322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.481 [2024-07-25 17:04:07.657334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.481 [2024-07-25 17:04:07.657338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.481 [2024-07-25 17:04:07.657342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.481 [2024-07-25 17:04:07.657350] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:47.481 [2024-07-25 17:04:07.657355] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:47.481 [2024-07-25 17:04:07.657359] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:47.481 [2024-07-25 17:04:07.657363] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:47.481 [2024-07-25 17:04:07.657367] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:47.482 [2024-07-25 17:04:07.657372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.657381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.657392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.657407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.482 [2024-07-25 17:04:07.657420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.482 [2024-07-25 17:04:07.657596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.482 [2024-07-25 17:04:07.657603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.482 [2024-07-25 17:04:07.657609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.482 [2024-07-25 17:04:07.657620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.657634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.482 [2024-07-25 17:04:07.657640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657647] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.657653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.482 [2024-07-25 17:04:07.657659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.657671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.482 [2024-07-25 17:04:07.657677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.657690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.482 [2024-07-25 17:04:07.657695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.657705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.657712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.657715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.657722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.482 [2024-07-25 17:04:07.657734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301e40, cid 0, qid 0 00:24:47.482 [2024-07-25 17:04:07.657739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2301fc0, cid 1, qid 0 00:24:47.482 [2024-07-25 17:04:07.657744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302140, cid 2, qid 0 00:24:47.482 [2024-07-25 17:04:07.657749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23022c0, cid 3, qid 0 00:24:47.482 [2024-07-25 17:04:07.657754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302440, cid 4, qid 0 00:24:47.482 [2024-07-25 17:04:07.658000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.482 [2024-07-25 17:04:07.658007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.482 [2024-07-25 17:04:07.658010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.658014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302440) on tqpair=0x227eec0 00:24:47.482 [2024-07-25 17:04:07.658019] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:47.482 [2024-07-25 17:04:07.658024] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.658036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.658043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.658049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.658053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.658056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.658063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.482 [2024-07-25 17:04:07.658074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302440, cid 4, qid 0 00:24:47.482 [2024-07-25 17:04:07.662207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.482 [2024-07-25 17:04:07.662215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.482 [2024-07-25 17:04:07.662219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.662222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302440) on tqpair=0x227eec0 00:24:47.482 [2024-07-25 17:04:07.662289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.662298] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.662306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.662309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.662316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.482 [2024-07-25 17:04:07.662328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302440, cid 4, qid 0 00:24:47.482 [2024-07-25 17:04:07.662573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.482 [2024-07-25 17:04:07.662581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.482 [2024-07-25 17:04:07.662584] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.662588] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=4096, cccid=4 00:24:47.482 [2024-07-25 17:04:07.662592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2302440) on tqpair(0x227eec0): expected_datao=0, payload_size=4096 00:24:47.482 [2024-07-25 17:04:07.662596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.662652] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.662656] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.704323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:47.482 [2024-07-25 17:04:07.704335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.482 [2024-07-25 17:04:07.704338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.704342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302440) on tqpair=0x227eec0 00:24:47.482 [2024-07-25 17:04:07.704353] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:47.482 [2024-07-25 17:04:07.704367] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.704377] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.704389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.704393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.704400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.482 [2024-07-25 17:04:07.704412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302440, cid 4, qid 0 00:24:47.482 [2024-07-25 17:04:07.704621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.482 [2024-07-25 17:04:07.704627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.482 [2024-07-25 17:04:07.704631] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.704634] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=4096, cccid=4 00:24:47.482 [2024-07-25 17:04:07.704638] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2302440) on tqpair(0x227eec0): expected_datao=0, payload_size=4096 00:24:47.482 [2024-07-25 17:04:07.704643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.704697] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.704701] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.749207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.482 [2024-07-25 17:04:07.749215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.482 [2024-07-25 17:04:07.749219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.749222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302440) on tqpair=0x227eec0 00:24:47.482 [2024-07-25 17:04:07.749237] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.749247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:47.482 [2024-07-25 17:04:07.749254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.749258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227eec0) 00:24:47.482 [2024-07-25 17:04:07.749265] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.482 [2024-07-25 17:04:07.749278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302440, cid 4, qid 0 00:24:47.482 [2024-07-25 17:04:07.749578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.482 [2024-07-25 17:04:07.749585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.482 [2024-07-25 17:04:07.749589] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.749592] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=4096, cccid=4 00:24:47.482 [2024-07-25 17:04:07.749596] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2302440) on tqpair(0x227eec0): expected_datao=0, payload_size=4096 00:24:47.482 [2024-07-25 17:04:07.749601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.749607] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.482 [2024-07-25 17:04:07.749611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.744 [2024-07-25 17:04:07.790387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.744 [2024-07-25 17:04:07.790399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.744 [2024-07-25 17:04:07.790403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.744 [2024-07-25 17:04:07.790407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302440) on tqpair=0x227eec0 00:24:47.744 [2024-07-25 17:04:07.790416] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:47.744 [2024-07-25 17:04:07.790428] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:47.744 [2024-07-25 17:04:07.790437] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:47.744 [2024-07-25 17:04:07.790445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:47.744 [2024-07-25 17:04:07.790450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:47.744 [2024-07-25 17:04:07.790455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:47.744 [2024-07-25 17:04:07.790460] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:47.745 [2024-07-25 17:04:07.790464] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:47.745 [2024-07-25 17:04:07.790470] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:47.745 [2024-07-25 17:04:07.790483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.790494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.790501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.790514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.745 [2024-07-25 17:04:07.790529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302440, cid 4, qid 0 00:24:47.745 [2024-07-25 17:04:07.790534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23025c0, cid 5, qid 0 00:24:47.745 [2024-07-25 17:04:07.790682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.790688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.745 [2024-07-25 17:04:07.790691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302440) on tqpair=0x227eec0 00:24:47.745 [2024-07-25 17:04:07.790702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.790707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.745 [2024-07-25 17:04:07.790711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23025c0) on tqpair=0x227eec0 00:24:47.745 [2024-07-25 17:04:07.790723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.790733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.790743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23025c0, cid 5, qid 0 00:24:47.745 [2024-07-25 17:04:07.790955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.790962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.745 [2024-07-25 17:04:07.790966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23025c0) on tqpair=0x227eec0 00:24:47.745 [2024-07-25 17:04:07.790982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.790986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.790992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.791002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23025c0, cid 5, qid 0 00:24:47.745 [2024-07-25 17:04:07.791259] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.791274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.745 [2024-07-25 17:04:07.791277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23025c0) on tqpair=0x227eec0 00:24:47.745 [2024-07-25 17:04:07.791291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.791301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.791312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23025c0, cid 5, qid 0 00:24:47.745 [2024-07-25 17:04:07.791550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.791556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.745 [2024-07-25 17:04:07.791559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23025c0) on tqpair=0x227eec0 00:24:47.745 [2024-07-25 17:04:07.791579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.791589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.791597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.791607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.791614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.791623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.791631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x227eec0) 00:24:47.745 [2024-07-25 17:04:07.791640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.745 [2024-07-25 17:04:07.791652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23025c0, cid 5, qid 0 00:24:47.745 [2024-07-25 17:04:07.791657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302440, cid 4, qid 0 
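The debug trace above is the admin-queue handshake the SPDK NVMe host library runs when host/identify.sh attaches to the target at 10.0.0.2:4420: enable the controller (CC.EN = 1, wait for CSTS.RDY = 1), IDENTIFY CONTROLLER, configure and post the four AER requests, set the keep-alive timeout, negotiate queue counts, identify the active namespace, then fetch the supported log pages and features before the controller reaches the ready state and the report below is printed. A comparable run can be reproduced by hand against an already-running target with SPDK's identify example; the binary path below is an assumption (it moves between SPDK releases), while the transport values are the ones visible in this log.

  # Hedged sketch: manual identify against the target exercised in this log.
  # The path to the identify example is assumed; adjust to wherever your SPDK build places it.
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'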
00:24:47.745 [2024-07-25 17:04:07.791661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2302740, cid 6, qid 0 00:24:47.745 [2024-07-25 17:04:07.791666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23028c0, cid 7, qid 0 00:24:47.745 [2024-07-25 17:04:07.791965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.745 [2024-07-25 17:04:07.791971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.745 [2024-07-25 17:04:07.791975] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.791978] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=8192, cccid=5 00:24:47.745 [2024-07-25 17:04:07.791982] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23025c0) on tqpair(0x227eec0): expected_datao=0, payload_size=8192 00:24:47.745 [2024-07-25 17:04:07.791987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792120] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792124] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.745 [2024-07-25 17:04:07.792136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.745 [2024-07-25 17:04:07.792139] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792142] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=512, cccid=4 00:24:47.745 [2024-07-25 17:04:07.792147] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2302440) on tqpair(0x227eec0): expected_datao=0, payload_size=512 00:24:47.745 [2024-07-25 17:04:07.792151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792157] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792160] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.745 [2024-07-25 17:04:07.792172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.745 [2024-07-25 17:04:07.792175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792178] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=512, cccid=6 00:24:47.745 [2024-07-25 17:04:07.792182] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2302740) on tqpair(0x227eec0): expected_datao=0, payload_size=512 00:24:47.745 [2024-07-25 17:04:07.792187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792193] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792196] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.745 [2024-07-25 17:04:07.792213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.745 [2024-07-25 17:04:07.792216] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792220] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227eec0): datao=0, datal=4096, cccid=7 00:24:47.745 [2024-07-25 17:04:07.792224] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23028c0) on tqpair(0x227eec0): expected_datao=0, payload_size=4096 00:24:47.745 [2024-07-25 17:04:07.792228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792235] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792238] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.792251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.745 [2024-07-25 17:04:07.792254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23025c0) on tqpair=0x227eec0 00:24:47.745 [2024-07-25 17:04:07.792270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.792276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.745 [2024-07-25 17:04:07.792279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.745 [2024-07-25 17:04:07.792285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302440) on tqpair=0x227eec0 00:24:47.745 [2024-07-25 17:04:07.792294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.745 [2024-07-25 17:04:07.792300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.746 [2024-07-25 17:04:07.792303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.746 [2024-07-25 17:04:07.792307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302740) on tqpair=0x227eec0 00:24:47.746 [2024-07-25 17:04:07.792314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.746 [2024-07-25 17:04:07.792320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.746 [2024-07-25 17:04:07.792323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.746 [2024-07-25 17:04:07.792327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23028c0) on tqpair=0x227eec0 00:24:47.746 ===================================================== 00:24:47.746 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.746 ===================================================== 00:24:47.746 Controller Capabilities/Features 00:24:47.746 ================================ 00:24:47.746 Vendor ID: 8086 00:24:47.746 Subsystem Vendor ID: 8086 00:24:47.746 Serial Number: SPDK00000000000001 00:24:47.746 Model Number: SPDK bdev Controller 00:24:47.746 Firmware Version: 24.09 00:24:47.746 Recommended Arb Burst: 6 00:24:47.746 IEEE OUI Identifier: e4 d2 5c 00:24:47.746 Multi-path I/O 00:24:47.746 May have multiple subsystem ports: Yes 00:24:47.746 May have multiple controllers: Yes 00:24:47.746 Associated with SR-IOV VF: No 00:24:47.746 Max Data Transfer Size: 131072 00:24:47.746 Max Number of Namespaces: 32 00:24:47.746 Max Number of I/O Queues: 127 00:24:47.746 NVMe Specification Version (VS): 1.3 00:24:47.746 NVMe Specification Version (Identify): 1.3 00:24:47.746 Maximum Queue Entries: 128 00:24:47.746 Contiguous Queues Required: Yes 00:24:47.746 
Arbitration Mechanisms Supported 00:24:47.746 Weighted Round Robin: Not Supported 00:24:47.746 Vendor Specific: Not Supported 00:24:47.746 Reset Timeout: 15000 ms 00:24:47.746 Doorbell Stride: 4 bytes 00:24:47.746 NVM Subsystem Reset: Not Supported 00:24:47.746 Command Sets Supported 00:24:47.746 NVM Command Set: Supported 00:24:47.746 Boot Partition: Not Supported 00:24:47.746 Memory Page Size Minimum: 4096 bytes 00:24:47.746 Memory Page Size Maximum: 4096 bytes 00:24:47.746 Persistent Memory Region: Not Supported 00:24:47.746 Optional Asynchronous Events Supported 00:24:47.746 Namespace Attribute Notices: Supported 00:24:47.746 Firmware Activation Notices: Not Supported 00:24:47.746 ANA Change Notices: Not Supported 00:24:47.746 PLE Aggregate Log Change Notices: Not Supported 00:24:47.746 LBA Status Info Alert Notices: Not Supported 00:24:47.746 EGE Aggregate Log Change Notices: Not Supported 00:24:47.746 Normal NVM Subsystem Shutdown event: Not Supported 00:24:47.746 Zone Descriptor Change Notices: Not Supported 00:24:47.746 Discovery Log Change Notices: Not Supported 00:24:47.746 Controller Attributes 00:24:47.746 128-bit Host Identifier: Supported 00:24:47.746 Non-Operational Permissive Mode: Not Supported 00:24:47.746 NVM Sets: Not Supported 00:24:47.746 Read Recovery Levels: Not Supported 00:24:47.746 Endurance Groups: Not Supported 00:24:47.746 Predictable Latency Mode: Not Supported 00:24:47.746 Traffic Based Keep ALive: Not Supported 00:24:47.746 Namespace Granularity: Not Supported 00:24:47.746 SQ Associations: Not Supported 00:24:47.746 UUID List: Not Supported 00:24:47.746 Multi-Domain Subsystem: Not Supported 00:24:47.746 Fixed Capacity Management: Not Supported 00:24:47.746 Variable Capacity Management: Not Supported 00:24:47.746 Delete Endurance Group: Not Supported 00:24:47.746 Delete NVM Set: Not Supported 00:24:47.746 Extended LBA Formats Supported: Not Supported 00:24:47.746 Flexible Data Placement Supported: Not Supported 00:24:47.746 00:24:47.746 Controller Memory Buffer Support 00:24:47.746 ================================ 00:24:47.746 Supported: No 00:24:47.746 00:24:47.746 Persistent Memory Region Support 00:24:47.746 ================================ 00:24:47.746 Supported: No 00:24:47.746 00:24:47.746 Admin Command Set Attributes 00:24:47.746 ============================ 00:24:47.746 Security Send/Receive: Not Supported 00:24:47.746 Format NVM: Not Supported 00:24:47.746 Firmware Activate/Download: Not Supported 00:24:47.746 Namespace Management: Not Supported 00:24:47.746 Device Self-Test: Not Supported 00:24:47.746 Directives: Not Supported 00:24:47.746 NVMe-MI: Not Supported 00:24:47.746 Virtualization Management: Not Supported 00:24:47.746 Doorbell Buffer Config: Not Supported 00:24:47.746 Get LBA Status Capability: Not Supported 00:24:47.746 Command & Feature Lockdown Capability: Not Supported 00:24:47.746 Abort Command Limit: 4 00:24:47.746 Async Event Request Limit: 4 00:24:47.746 Number of Firmware Slots: N/A 00:24:47.746 Firmware Slot 1 Read-Only: N/A 00:24:47.746 Firmware Activation Without Reset: N/A 00:24:47.746 Multiple Update Detection Support: N/A 00:24:47.746 Firmware Update Granularity: No Information Provided 00:24:47.746 Per-Namespace SMART Log: No 00:24:47.746 Asymmetric Namespace Access Log Page: Not Supported 00:24:47.746 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:47.746 Command Effects Log Page: Supported 00:24:47.746 Get Log Page Extended Data: Supported 00:24:47.746 Telemetry Log Pages: Not Supported 00:24:47.746 Persistent Event Log 
Pages: Not Supported 00:24:47.746 Supported Log Pages Log Page: May Support 00:24:47.746 Commands Supported & Effects Log Page: Not Supported 00:24:47.746 Feature Identifiers & Effects Log Page:May Support 00:24:47.746 NVMe-MI Commands & Effects Log Page: May Support 00:24:47.746 Data Area 4 for Telemetry Log: Not Supported 00:24:47.746 Error Log Page Entries Supported: 128 00:24:47.746 Keep Alive: Supported 00:24:47.746 Keep Alive Granularity: 10000 ms 00:24:47.746 00:24:47.746 NVM Command Set Attributes 00:24:47.746 ========================== 00:24:47.746 Submission Queue Entry Size 00:24:47.746 Max: 64 00:24:47.746 Min: 64 00:24:47.746 Completion Queue Entry Size 00:24:47.746 Max: 16 00:24:47.746 Min: 16 00:24:47.746 Number of Namespaces: 32 00:24:47.746 Compare Command: Supported 00:24:47.746 Write Uncorrectable Command: Not Supported 00:24:47.746 Dataset Management Command: Supported 00:24:47.746 Write Zeroes Command: Supported 00:24:47.746 Set Features Save Field: Not Supported 00:24:47.746 Reservations: Supported 00:24:47.746 Timestamp: Not Supported 00:24:47.746 Copy: Supported 00:24:47.746 Volatile Write Cache: Present 00:24:47.746 Atomic Write Unit (Normal): 1 00:24:47.746 Atomic Write Unit (PFail): 1 00:24:47.746 Atomic Compare & Write Unit: 1 00:24:47.746 Fused Compare & Write: Supported 00:24:47.746 Scatter-Gather List 00:24:47.746 SGL Command Set: Supported 00:24:47.746 SGL Keyed: Supported 00:24:47.746 SGL Bit Bucket Descriptor: Not Supported 00:24:47.746 SGL Metadata Pointer: Not Supported 00:24:47.746 Oversized SGL: Not Supported 00:24:47.746 SGL Metadata Address: Not Supported 00:24:47.746 SGL Offset: Supported 00:24:47.746 Transport SGL Data Block: Not Supported 00:24:47.746 Replay Protected Memory Block: Not Supported 00:24:47.746 00:24:47.746 Firmware Slot Information 00:24:47.746 ========================= 00:24:47.746 Active slot: 1 00:24:47.746 Slot 1 Firmware Revision: 24.09 00:24:47.746 00:24:47.746 00:24:47.746 Commands Supported and Effects 00:24:47.746 ============================== 00:24:47.746 Admin Commands 00:24:47.746 -------------- 00:24:47.746 Get Log Page (02h): Supported 00:24:47.746 Identify (06h): Supported 00:24:47.746 Abort (08h): Supported 00:24:47.746 Set Features (09h): Supported 00:24:47.746 Get Features (0Ah): Supported 00:24:47.746 Asynchronous Event Request (0Ch): Supported 00:24:47.746 Keep Alive (18h): Supported 00:24:47.746 I/O Commands 00:24:47.746 ------------ 00:24:47.746 Flush (00h): Supported LBA-Change 00:24:47.746 Write (01h): Supported LBA-Change 00:24:47.746 Read (02h): Supported 00:24:47.746 Compare (05h): Supported 00:24:47.746 Write Zeroes (08h): Supported LBA-Change 00:24:47.746 Dataset Management (09h): Supported LBA-Change 00:24:47.746 Copy (19h): Supported LBA-Change 00:24:47.746 00:24:47.746 Error Log 00:24:47.746 ========= 00:24:47.746 00:24:47.746 Arbitration 00:24:47.746 =========== 00:24:47.746 Arbitration Burst: 1 00:24:47.746 00:24:47.746 Power Management 00:24:47.746 ================ 00:24:47.746 Number of Power States: 1 00:24:47.746 Current Power State: Power State #0 00:24:47.746 Power State #0: 00:24:47.746 Max Power: 0.00 W 00:24:47.746 Non-Operational State: Operational 00:24:47.747 Entry Latency: Not Reported 00:24:47.747 Exit Latency: Not Reported 00:24:47.747 Relative Read Throughput: 0 00:24:47.747 Relative Read Latency: 0 00:24:47.747 Relative Write Throughput: 0 00:24:47.747 Relative Write Latency: 0 00:24:47.747 Idle Power: Not Reported 00:24:47.747 Active Power: Not Reported 00:24:47.747 
Non-Operational Permissive Mode: Not Supported 00:24:47.747 00:24:47.747 Health Information 00:24:47.747 ================== 00:24:47.747 Critical Warnings: 00:24:47.747 Available Spare Space: OK 00:24:47.747 Temperature: OK 00:24:47.747 Device Reliability: OK 00:24:47.747 Read Only: No 00:24:47.747 Volatile Memory Backup: OK 00:24:47.747 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:47.747 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:47.747 Available Spare: 0% 00:24:47.747 Available Spare Threshold: 0% 00:24:47.747 Life Percentage Used:[2024-07-25 17:04:07.792426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.792431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x227eec0) 00:24:47.747 [2024-07-25 17:04:07.792438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.747 [2024-07-25 17:04:07.792449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23028c0, cid 7, qid 0 00:24:47.747 [2024-07-25 17:04:07.792686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.747 [2024-07-25 17:04:07.792693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.747 [2024-07-25 17:04:07.792697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.792701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23028c0) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.792732] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:47.747 [2024-07-25 17:04:07.792741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301e40) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.792747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.747 [2024-07-25 17:04:07.792752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2301fc0) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.792757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.747 [2024-07-25 17:04:07.792762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2302140) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.792766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.747 [2024-07-25 17:04:07.792771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23022c0) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.792776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.747 [2024-07-25 17:04:07.792784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.792787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.792791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227eec0) 00:24:47.747 [2024-07-25 17:04:07.792798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.747 [2024-07-25 17:04:07.792811] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23022c0, cid 3, qid 0 00:24:47.747 [2024-07-25 17:04:07.793043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.747 [2024-07-25 17:04:07.793050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.747 [2024-07-25 17:04:07.793053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.793057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23022c0) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.793066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.793070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.793074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227eec0) 00:24:47.747 [2024-07-25 17:04:07.793080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.747 [2024-07-25 17:04:07.793094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23022c0, cid 3, qid 0 00:24:47.747 [2024-07-25 17:04:07.797207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.747 [2024-07-25 17:04:07.797216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.747 [2024-07-25 17:04:07.797220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.797223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23022c0) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.797228] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:47.747 [2024-07-25 17:04:07.797233] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:47.747 [2024-07-25 17:04:07.797243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.797247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.797250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227eec0) 00:24:47.747 [2024-07-25 17:04:07.797257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.747 [2024-07-25 17:04:07.797269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23022c0, cid 3, qid 0 00:24:47.747 [2024-07-25 17:04:07.797396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.747 [2024-07-25 17:04:07.797403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.747 [2024-07-25 17:04:07.797406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.747 [2024-07-25 17:04:07.797410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23022c0) on tqpair=0x227eec0 00:24:47.747 [2024-07-25 17:04:07.797418] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:24:47.747 0% 00:24:47.747 Data Units Read: 0 00:24:47.747 Data Units Written: 0 00:24:47.747 Host Read Commands: 0 00:24:47.747 Host Write Commands: 0 00:24:47.747 Controller Busy Time: 0 minutes 00:24:47.747 Power Cycles: 0 00:24:47.747 Power On Hours: 0 hours 00:24:47.747 Unsafe Shutdowns: 0 00:24:47.747 
Unrecoverable Media Errors: 0 00:24:47.747 Lifetime Error Log Entries: 0 00:24:47.747 Warning Temperature Time: 0 minutes 00:24:47.747 Critical Temperature Time: 0 minutes 00:24:47.747 00:24:47.747 Number of Queues 00:24:47.747 ================ 00:24:47.747 Number of I/O Submission Queues: 127 00:24:47.747 Number of I/O Completion Queues: 127 00:24:47.747 00:24:47.747 Active Namespaces 00:24:47.747 ================= 00:24:47.747 Namespace ID:1 00:24:47.747 Error Recovery Timeout: Unlimited 00:24:47.747 Command Set Identifier: NVM (00h) 00:24:47.747 Deallocate: Supported 00:24:47.747 Deallocated/Unwritten Error: Not Supported 00:24:47.747 Deallocated Read Value: Unknown 00:24:47.747 Deallocate in Write Zeroes: Not Supported 00:24:47.747 Deallocated Guard Field: 0xFFFF 00:24:47.747 Flush: Supported 00:24:47.747 Reservation: Supported 00:24:47.747 Namespace Sharing Capabilities: Multiple Controllers 00:24:47.747 Size (in LBAs): 131072 (0GiB) 00:24:47.747 Capacity (in LBAs): 131072 (0GiB) 00:24:47.747 Utilization (in LBAs): 131072 (0GiB) 00:24:47.747 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:47.747 EUI64: ABCDEF0123456789 00:24:47.747 UUID: 6419dd7a-9558-4e51-91cf-3d4ea9e0ef3f 00:24:47.747 Thin Provisioning: Not Supported 00:24:47.747 Per-NS Atomic Units: Yes 00:24:47.747 Atomic Boundary Size (Normal): 0 00:24:47.747 Atomic Boundary Size (PFail): 0 00:24:47.747 Atomic Boundary Offset: 0 00:24:47.747 Maximum Single Source Range Length: 65535 00:24:47.747 Maximum Copy Length: 65535 00:24:47.747 Maximum Source Range Count: 1 00:24:47.747 NGUID/EUI64 Never Reused: No 00:24:47.747 Namespace Write Protected: No 00:24:47.747 Number of LBA Formats: 1 00:24:47.747 Current LBA Format: LBA Format #00 00:24:47.747 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:47.747 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.747 rmmod nvme_tcp 00:24:47.747 rmmod nvme_fabrics 00:24:47.747 rmmod nvme_keyring 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@489 -- # '[' -n 1524674 ']' 00:24:47.747 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1524674 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1524674 ']' 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1524674 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1524674 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1524674' 00:24:47.748 killing process with pid 1524674 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1524674 00:24:47.748 17:04:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1524674 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.009 17:04:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.925 17:04:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.925 00:24:49.925 real 0m10.682s 00:24:49.925 user 0m7.949s 00:24:49.925 sys 0m5.475s 00:24:49.925 17:04:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.925 17:04:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:49.925 ************************************ 00:24:49.925 END TEST nvmf_identify 00:24:49.925 ************************************ 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.187 ************************************ 00:24:50.187 START TEST nvmf_perf 00:24:50.187 ************************************ 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:50.187 * Looking 
for test storage... 00:24:50.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.187 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
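At this point nvmftestinit is about to probe the physical NICs that host/perf.sh will drive traffic over. The target side of such a run is typically populated with a malloc bdev and a TCP subsystem equivalent to the cnode1 subsystem torn down above (the trace sets MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512). A minimal sketch of that setup with scripts/rpc.py follows; the perf.sh helper wrappers are not reproduced here, and the listener address, port, and serial number mirror the values seen earlier in this log.

  # Hedged sketch: populate a running SPDK target with the kind of subsystem this suite exercises.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp                              # register the TCP transport
  bdev=$($rpc bdev_malloc_create 64 512)                         # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"  # expose the bdev as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420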
00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.188 17:04:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.338 
17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:58.338 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:58.338 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:24:58.338 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:58.338 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:24:58.338 00:24:58.338 --- 10.0.0.2 ping statistics --- 00:24:58.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.338 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:24:58.338 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.472 ms 00:24:58.339 00:24:58.339 --- 10.0.0.1 ping statistics --- 00:24:58.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.339 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1529261 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1529261 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1529261 ']' 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
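The nvmf_tcp_init sequence above splits the dual-port E810 NIC between two network stacks: port cvl_0_0 is moved into a private namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables rule accepting NVMe/TCP traffic and a ping in each direction to confirm the path. Condensed into a sketch (the interface names are the ones this machine detected and will differ elsewhere):

  # Target port in its own namespace, initiator port in the root namespace
  # (commands condensed from the nvmf_tcp_init trace above).
  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator IP (root namespace)
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # accept NVMe/TCP on the initiator-side interface
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                    # target -> initiator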
00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.339 17:04:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.339 [2024-07-25 17:04:17.675954] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:24:58.339 [2024-07-25 17:04:17.676006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.339 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.339 [2024-07-25 17:04:17.741774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.339 [2024-07-25 17:04:17.806809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.339 [2024-07-25 17:04:17.806847] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.339 [2024-07-25 17:04:17.806854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.339 [2024-07-25 17:04:17.806860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.339 [2024-07-25 17:04:17.806866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.339 [2024-07-25 17:04:17.807005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.339 [2024-07-25 17:04:17.807116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.339 [2024-07-25 17:04:17.807258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.339 [2024-07-25 17:04:17.807272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:58.339 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:58.912 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:58.912 17:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:58.912 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:58.912 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:59.173 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:59.173 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:65:00.0 ']' 00:24:59.173 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:59.173 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:59.173 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:59.438 [2024-07-25 17:04:19.465642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.438 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.438 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:59.438 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.736 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:59.736 17:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:59.997 17:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.997 [2024-07-25 17:04:20.144228] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.997 17:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:00.259 17:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:00.259 17:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:00.259 17:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:00.259 17:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:01.647 Initializing NVMe Controllers 00:25:01.647 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:01.647 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:01.647 Initialization complete. Launching workers. 
00:25:01.647 ======================================================== 00:25:01.647 Latency(us) 00:25:01.647 Device Information : IOPS MiB/s Average min max 00:25:01.647 PCIE (0000:65:00.0) NSID 1 from core 0: 79105.62 309.01 403.97 13.48 8194.75 00:25:01.647 ======================================================== 00:25:01.647 Total : 79105.62 309.01 403.97 13.48 8194.75 00:25:01.647 00:25:01.647 17:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:01.647 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.591 Initializing NVMe Controllers 00:25:02.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:02.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:02.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:02.591 Initialization complete. Launching workers. 00:25:02.591 ======================================================== 00:25:02.591 Latency(us) 00:25:02.591 Device Information : IOPS MiB/s Average min max 00:25:02.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.37 10418.09 591.62 45487.44 00:25:02.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20710.58 6801.78 47906.66 00:25:02.591 ======================================================== 00:25:02.591 Total : 146.00 0.57 13942.91 591.62 47906.66 00:25:02.591 00:25:02.591 17:04:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:02.852 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.238 Initializing NVMe Controllers 00:25:04.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:04.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:04.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:04.238 Initialization complete. Launching workers. 
00:25:04.238 ======================================================== 00:25:04.238 Latency(us) 00:25:04.238 Device Information : IOPS MiB/s Average min max 00:25:04.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7728.84 30.19 4140.85 753.83 8678.73 00:25:04.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3575.69 13.97 9046.12 4904.12 47872.14 00:25:04.238 ======================================================== 00:25:04.238 Total : 11304.53 44.16 5692.42 753.83 47872.14 00:25:04.238 00:25:04.238 17:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:04.238 17:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:04.238 17:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:04.238 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.785 Initializing NVMe Controllers 00:25:06.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.785 Controller IO queue size 128, less than required. 00:25:06.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:06.785 Controller IO queue size 128, less than required. 00:25:06.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:06.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:06.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:06.785 Initialization complete. Launching workers. 00:25:06.785 ======================================================== 00:25:06.785 Latency(us) 00:25:06.785 Device Information : IOPS MiB/s Average min max 00:25:06.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 806.50 201.62 169050.36 86142.11 285100.81 00:25:06.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.00 145.00 233112.94 70772.96 366941.59 00:25:06.785 ======================================================== 00:25:06.785 Total : 1386.50 346.62 195848.99 70772.96 366941.59 00:25:06.785 00:25:06.785 17:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:06.785 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.047 No valid NVMe controllers or AIO or URING devices found 00:25:07.047 Initializing NVMe Controllers 00:25:07.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:07.047 Controller IO queue size 128, less than required. 00:25:07.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:07.047 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:07.047 Controller IO queue size 128, less than required. 00:25:07.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:07.047 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:07.047 WARNING: Some requested NVMe devices were skipped 00:25:07.047 17:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:07.047 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.615 Initializing NVMe Controllers 00:25:09.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.615 Controller IO queue size 128, less than required. 00:25:09.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.615 Controller IO queue size 128, less than required. 00:25:09.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:09.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:09.615 Initialization complete. Launching workers. 00:25:09.615 00:25:09.615 ==================== 00:25:09.615 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:09.615 TCP transport: 00:25:09.615 polls: 40334 00:25:09.615 idle_polls: 14187 00:25:09.615 sock_completions: 26147 00:25:09.615 nvme_completions: 3489 00:25:09.615 submitted_requests: 5230 00:25:09.615 queued_requests: 1 00:25:09.615 00:25:09.615 ==================== 00:25:09.615 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:09.615 TCP transport: 00:25:09.615 polls: 40315 00:25:09.615 idle_polls: 15958 00:25:09.615 sock_completions: 24357 00:25:09.615 nvme_completions: 3601 00:25:09.615 submitted_requests: 5462 00:25:09.615 queued_requests: 1 00:25:09.615 ======================================================== 00:25:09.615 Latency(us) 00:25:09.615 Device Information : IOPS MiB/s Average min max 00:25:09.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 871.74 217.94 152893.23 79340.58 257126.01 00:25:09.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 899.73 224.93 144708.23 86259.61 220291.37 00:25:09.615 ======================================================== 00:25:09.615 Total : 1771.48 442.87 148736.07 79340.58 257126.01 00:25:09.615 00:25:09.615 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:09.615 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.876 17:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.876 rmmod nvme_tcp 00:25:09.876 rmmod nvme_fabrics 00:25:09.876 rmmod nvme_keyring 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1529261 ']' 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1529261 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1529261 ']' 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1529261 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529261 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529261' 00:25:09.876 killing process with pid 1529261 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1529261 00:25:09.876 17:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1529261 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.424 17:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.338 00:25:14.338 real 0m23.914s 00:25:14.338 user 0m58.657s 00:25:14.338 sys 0m7.694s 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:14.338 ************************************ 00:25:14.338 END TEST nvmf_perf 00:25:14.338 ************************************ 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:14.338 ************************************ 00:25:14.338 START TEST nvmf_fio_host 00:25:14.338 ************************************ 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:14.338 * Looking for test storage... 00:25:14.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.338 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.339 17:04:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:20.931 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:20.932 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:20.932 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.932 17:04:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:20.932 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:20.932 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.932 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:21.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:25:21.193 00:25:21.193 --- 10.0.0.2 ping statistics --- 00:25:21.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.193 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:25:21.193 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:25:21.454 00:25:21.454 --- 10.0.0.1 ping statistics --- 00:25:21.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.454 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:25:21.454 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.454 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:21.454 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:21.454 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.454 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.454 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.454 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1536111 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1536111 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1536111 ']' 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.455 17:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.455 [2024-07-25 17:04:41.589679] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
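At this point host/fio.sh has launched nvmf_tgt inside the target namespace and is waiting for its JSON-RPC socket to come up. A simplified sketch of that step, with the workspace path shortened; the polling loop is a stand-in for the harness's waitforlisten helper, whose internals are not shown in the trace:

  # Start the target in the namespace (flags as in the trace) and wait for the RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during start-up" >&2; exit 1; }
      sleep 0.5
  done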
00:25:21.455 [2024-07-25 17:04:41.589745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.455 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.455 [2024-07-25 17:04:41.662637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.716 [2024-07-25 17:04:41.738461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.716 [2024-07-25 17:04:41.738500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.716 [2024-07-25 17:04:41.738508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.716 [2024-07-25 17:04:41.738515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.716 [2024-07-25 17:04:41.738520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.716 [2024-07-25 17:04:41.738695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.716 [2024-07-25 17:04:41.742219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.716 [2024-07-25 17:04:41.742572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.716 [2024-07-25 17:04:41.742573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.288 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:22.288 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:25:22.288 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:22.288 [2024-07-25 17:04:42.504395] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.288 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:22.288 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.288 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.549 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:22.549 Malloc1 00:25:22.549 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.810 17:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:23.071 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.071 [2024-07-25 17:04:43.226035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.071 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:23.331 
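Up to this point the nvmf_fio_host target has been provisioned entirely through scripts/rpc.py: a TCP transport, a RAM-backed Malloc bdev, a subsystem with one namespace, and data plus discovery listeners on 10.0.0.2:4420. A condensed sketch of that same sequence, assuming nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace and with the long workspace path to rpc.py shortened:

  # Sketch of the provisioning steps traced above (flags taken from the log).
  rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, matching NVMF_TRANSPORT_OPTS above
  rpc.py bdev_malloc_create 64 512 -b Malloc1                    # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420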
17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:23.331 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:23.332 17:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:23.601 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:23.601 fio-3.35 00:25:23.601 Starting 
1 thread 00:25:23.601 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.213 00:25:26.213 test: (groupid=0, jobs=1): err= 0: pid=1536853: Thu Jul 25 17:04:46 2024 00:25:26.213 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2004msec) 00:25:26.213 slat (usec): min=2, max=279, avg= 2.19, stdev= 2.42 00:25:26.213 clat (usec): min=3032, max=8592, avg=5362.35, stdev=748.09 00:25:26.213 lat (usec): min=3034, max=8594, avg=5364.54, stdev=748.12 00:25:26.213 clat percentiles (usec): 00:25:26.213 | 1.00th=[ 3916], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 4817], 00:25:26.213 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5407], 00:25:26.213 | 70.00th=[ 5604], 80.00th=[ 5866], 90.00th=[ 6325], 95.00th=[ 6849], 00:25:26.213 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[ 8225], 99.95th=[ 8356], 00:25:26.213 | 99.99th=[ 8455] 00:25:26.213 bw ( KiB/s): min=49736, max=56440, per=99.92%, avg=54610.00, stdev=3255.89, samples=4 00:25:26.213 iops : min=12434, max=14110, avg=13652.50, stdev=813.97, samples=4 00:25:26.213 write: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(107MiB/2004msec); 0 zone resets 00:25:26.213 slat (usec): min=2, max=278, avg= 2.26, stdev= 1.85 00:25:26.213 clat (usec): min=2234, max=7430, avg=3955.42, stdev=631.60 00:25:26.213 lat (usec): min=2237, max=7432, avg=3957.68, stdev=631.69 00:25:26.213 clat percentiles (usec): 00:25:26.213 | 1.00th=[ 2671], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3523], 00:25:26.213 | 30.00th=[ 3687], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4047], 00:25:26.213 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 5014], 00:25:26.213 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 6718], 99.95th=[ 6915], 00:25:26.213 | 99.99th=[ 7373] 00:25:26.213 bw ( KiB/s): min=50280, max=56352, per=100.00%, avg=54578.00, stdev=2876.93, samples=4 00:25:26.213 iops : min=12570, max=14088, avg=13644.50, stdev=719.23, samples=4 00:25:26.213 lat (msec) : 4=28.05%, 10=71.95% 00:25:26.213 cpu : usr=69.15%, sys=24.11%, ctx=12, majf=0, minf=6 00:25:26.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:26.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:26.213 issued rwts: total=27382,27339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:26.213 00:25:26.213 Run status group 0 (all jobs): 00:25:26.213 READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2004-2004msec 00:25:26.213 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2004-2004msec 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.213 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.214 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:26.214 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:26.214 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:26.214 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:26.214 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:26.214 17:04:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:26.486 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:26.486 fio-3.35 00:25:26.486 Starting 1 thread 00:25:26.486 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.034 00:25:29.034 test: (groupid=0, jobs=1): err= 0: pid=1537480: Thu Jul 25 17:04:48 2024 00:25:29.034 read: IOPS=8449, BW=132MiB/s (138MB/s)(265MiB/2005msec) 00:25:29.034 slat (usec): min=3, max=111, avg= 3.61, stdev= 1.52 00:25:29.034 clat (usec): min=2667, max=54401, avg=9333.86, stdev=4306.64 00:25:29.034 lat (usec): min=2671, max=54404, avg=9337.47, stdev=4306.85 00:25:29.034 clat percentiles (usec): 00:25:29.034 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6849], 00:25:29.034 | 30.00th=[ 7439], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:25:29.034 | 70.00th=[10028], 80.00th=[11076], 90.00th=[12649], 95.00th=[14353], 00:25:29.034 | 99.00th=[19792], 99.50th=[45876], 99.90th=[53740], 99.95th=[53740], 00:25:29.034 | 99.99th=[54264] 00:25:29.034 bw ( KiB/s): min=54784, max=81024, per=51.41%, avg=69504.00, stdev=13310.77, samples=4 00:25:29.034 iops : min= 
3424, max= 5064, avg=4344.00, stdev=831.92, samples=4 00:25:29.034 write: IOPS=5050, BW=78.9MiB/s (82.7MB/s)(142MiB/1794msec); 0 zone resets 00:25:29.034 slat (usec): min=39, max=456, avg=41.18, stdev= 9.23 00:25:29.034 clat (usec): min=2576, max=22014, avg=9893.65, stdev=2264.85 00:25:29.034 lat (usec): min=2616, max=22147, avg=9934.83, stdev=2268.93 00:25:29.034 clat percentiles (usec): 00:25:29.034 | 1.00th=[ 6587], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8225], 00:25:29.034 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:25:29.034 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11994], 95.00th=[13042], 00:25:29.034 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21890], 99.95th=[21890], 00:25:29.034 | 99.99th=[21890] 00:25:29.034 bw ( KiB/s): min=57024, max=84128, per=89.26%, avg=72128.00, stdev=13307.46, samples=4 00:25:29.035 iops : min= 3564, max= 5258, avg=4508.00, stdev=831.72, samples=4 00:25:29.035 lat (msec) : 4=0.33%, 10=65.72%, 20=32.94%, 50=0.91%, 100=0.11% 00:25:29.035 cpu : usr=84.88%, sys=10.73%, ctx=13, majf=0, minf=13 00:25:29.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:29.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:29.035 issued rwts: total=16941,9060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:29.035 00:25:29.035 Run status group 0 (all jobs): 00:25:29.035 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=265MiB (278MB), run=2005-2005msec 00:25:29.035 WRITE: bw=78.9MiB/s (82.7MB/s), 78.9MiB/s-78.9MiB/s (82.7MB/s-82.7MB/s), io=142MiB (148MB), run=1794-1794msec 00:25:29.035 17:04:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.035 rmmod nvme_tcp 00:25:29.035 rmmod nvme_fabrics 00:25:29.035 rmmod nvme_keyring 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1536111 ']' 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1536111 00:25:29.035 
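The two fio runs above do not use the kernel NVMe initiator at all: the fio_plugin helper LD_PRELOADs SPDK's fio engine (build/fio/spdk_nvme) and hands the NVMe-oF/TCP connection parameters to fio through the --filename key=value string. A minimal standalone sketch of the first run, with the SPDK checkout path abbreviated to SPDK_DIR (a placeholder) and assuming fio is on PATH:

  # Sketch: drive the NVMe-oF/TCP target with fio via the SPDK NVMe plugin.
  SPDK_DIR=/path/to/spdk   # placeholder for the workspace path used in the log
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" fio \
      "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

The second run is the same invocation with app/fio/nvme/mock_sgl_config.fio and no --bs override.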
17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1536111 ']' 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1536111 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536111 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536111' 00:25:29.035 killing process with pid 1536111 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1536111 00:25:29.035 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1536111 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.296 17:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.213 17:04:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.213 00:25:31.213 real 0m17.231s 00:25:31.213 user 1m8.095s 00:25:31.213 sys 0m7.242s 00:25:31.213 17:04:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:31.213 17:04:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.213 ************************************ 00:25:31.213 END TEST nvmf_fio_host 00:25:31.213 ************************************ 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.475 ************************************ 00:25:31.475 START TEST nvmf_failover 00:25:31.475 ************************************ 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:31.475 * Looking for test storage... 
00:25:31.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
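nvmftestinit, which the trace enters next, rebuilds the same split topology the nvmf_fio_host test used earlier in this log: one port of the NIC pair stays in the root namespace as the initiator side (cvl_0_1, 10.0.0.1) and the other is moved into a private network namespace for the target (cvl_0_0, 10.0.0.2). Condensed from the ip and iptables commands recorded in this log, with this CI host's interface names kept as-is:

  # Sketch of the namespace plumbing performed by nvmf/common.sh on this host.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                     # root ns -> target ns reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns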
00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.475 17:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.641 17:04:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:39.641 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:39.641 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:39.641 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:39.641 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.641 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:25:39.642 00:25:39.642 --- 10.0.0.2 ping statistics --- 00:25:39.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.642 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:25:39.642 00:25:39.642 --- 10.0.0.1 ping statistics --- 00:25:39.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.642 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1542021 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1542021 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1542021 ']' 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.642 17:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:39.642 [2024-07-25 17:04:58.991623] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:25:39.642 [2024-07-25 17:04:58.991673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.642 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.642 [2024-07-25 17:04:59.077393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:39.642 [2024-07-25 17:04:59.165054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.642 [2024-07-25 17:04:59.165117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.642 [2024-07-25 17:04:59.165125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.642 [2024-07-25 17:04:59.165132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.642 [2024-07-25 17:04:59.165139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
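The failover target is launched inside the namespace with tracepoints enabled (-e 0xFFFF) and a three-core mask (-m 0xE). As the app_setup_trace notices above point out, the resulting trace buffer can be sampled while the target runs or copied out of /dev/shm afterwards. A small sketch of both options, with the workspace path abbreviated to SPDK_DIR (a placeholder):

  # Sketch: start the target with tracing on, then capture the trace data.
  SPDK_DIR=/path/to/spdk   # placeholder
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

  # Snapshot the nvmf tracepoints of shared-memory instance 0, as the notice suggests:
  spdk_trace -s nvmf -i 0
  # ...or keep the raw trace file for offline analysis, per the last notice:
  cp /dev/shm/nvmf_trace.0 /tmp/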
00:25:39.642 [2024-07-25 17:04:59.165295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.642 [2024-07-25 17:04:59.165476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.642 [2024-07-25 17:04:59.165476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.642 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:39.642 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:39.642 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.642 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:39.642 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:39.642 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.642 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:39.903 [2024-07-25 17:04:59.947385] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.903 17:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:39.903 Malloc0 00:25:39.903 17:05:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.164 17:05:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:40.426 17:05:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.426 [2024-07-25 17:05:00.653903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.426 17:05:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:40.688 [2024-07-25 17:05:00.822326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:40.688 17:05:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:40.949 [2024-07-25 17:05:00.986846] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1542519 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1542519 /var/tmp/bdevperf.sock 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1542519 ']' 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:40.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.949 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:41.894 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:41.894 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:41.894 17:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:41.894 NVMe0n1 00:25:41.894 17:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.155 00:25:42.155 17:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1542717 00:25:42.155 17:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:42.155 17:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:43.097 17:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.358 [2024-07-25 17:05:03.496116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.358 [2024-07-25 17:05:03.496258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496317] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 [2024-07-25 17:05:03.496352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67cb80 is same with the state(5) to be set 00:25:43.359 17:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:46.661 17:05:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.661 00:25:46.922 17:05:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:46.922 [2024-07-25 17:05:07.097129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d990 is same with the state(5) to be set 00:25:46.922 [2024-07-25 17:05:07.097220] 
00:25:46.922 17:05:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:50.226 17:05:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:50.226 [2024-07-25 17:05:10.274416] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:50.226 17:05:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:51.170 17:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:51.432 [2024-07-25 17:05:11.456532 .. 17:05:11.456962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e870 is same with the state(5) to be set (identical message logged repeatedly over this interval, one entry per timestamp)
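Lines 53 and 57 of failover.sh reverse the direction: nvmf_subsystem_add_listener re-creates the original 4420 listener (the target immediately logs that it is listening on port 4420 again), and after a short settle time nvmf_subsystem_remove_listener drops 4422, pushing the initiator back onto 4420. Each removal tears down the active qpair, which is what produces the bursts of nvmf_tcp_qpair_set_recv_state errors seen above. A hedged sketch of this add/remove cycle written as a loop, with the NQN, address and ports taken from the log; the loop itself is illustrative and not part of failover.sh:

    NQN=nqn.2016-06.io.spdk:cnode1
    # Alternate the live listener between two ports to exercise repeated failover/failback.
    for pair in "4420 4422" "4422 4420"; do
        set -- $pair                      # $1 = port to add, $2 = port to remove
        rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$1"
        sleep 1
        rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s "$2"
        sleep 3
    done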
00:25:51.433 17:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1542717
00:25:58.073 0
00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1542519
00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1542519 ']'
00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1542519
00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1542519
00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- #
process_name=reactor_0 00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1542519' 00:25:58.073 killing process with pid 1542519 00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1542519 00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1542519 00:25:58.073 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:58.073 [2024-07-25 17:05:01.056381] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:25:58.073 [2024-07-25 17:05:01.056449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542519 ] 00:25:58.073 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.073 [2024-07-25 17:05:01.121454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.073 [2024-07-25 17:05:01.185074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.073 Running I/O for 15 seconds... 00:25:58.073 [2024-07-25 17:05:03.496724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 
[2024-07-25 17:05:03.496863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.496986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.496993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.497003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.497010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.497020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.497027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.497037] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.497044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.497054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.497061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.497071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.497078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.073 [2024-07-25 17:05:03.497087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.073 [2024-07-25 17:05:03.497094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497381] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.074 [2024-07-25 17:05:03.497405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98432 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.074 [2024-07-25 17:05:03.497649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.074 [2024-07-25 17:05:03.497657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.075 [2024-07-25 17:05:03.497725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497893] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.497926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.497942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.497958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.497974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.497983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.497990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.498007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.498025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.498041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.498058] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.075 [2024-07-25 17:05:03.498189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.075 [2024-07-25 17:05:03.498210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.075 [2024-07-25 17:05:03.498219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:58.076 [2024-07-25 17:05:03.498402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.076 [2024-07-25 17:05:03.498491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.076 [2024-07-25 17:05:03.498508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.076 [2024-07-25 17:05:03.498524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498735] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.076 [2024-07-25 17:05:03.498781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.076 [2024-07-25 17:05:03.498792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:03.498799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:03.498816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:03.498833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:03.498852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:03.498873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:03.498892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:03.498912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.077 
[2024-07-25 17:05:03.498941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.077 [2024-07-25 17:05:03.498949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0 00:25:58.077 [2024-07-25 17:05:03.498957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.498996] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd5b2c0 was disconnected and freed. reset controller. 00:25:58.077 [2024-07-25 17:05:03.499005] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:58.077 [2024-07-25 17:05:03.499025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.077 [2024-07-25 17:05:03.499034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.499042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.077 [2024-07-25 17:05:03.499050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.499058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.077 [2024-07-25 17:05:03.499065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.499074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.077 [2024-07-25 17:05:03.499081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:03.499088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.077 [2024-07-25 17:05:03.502622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.077 [2024-07-25 17:05:03.502647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5eef0 (9): Bad file descriptor 00:25:58.077 [2024-07-25 17:05:03.543418] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
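The block above records a complete failover cycle: queued READ/WRITE commands on qid 1 are manually completed with "ABORTED - SQ DELETION", qpair 0xd5b2c0 is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the cycle ends with "Resetting controller successful". As a rough sanity check on a captured log like this one, the shell commands below (a sketch only; the file name nvmf_failover.log is an assumption, and the grep patterns simply reuse strings printed above) count the aborted completions and list the failover transitions:

  # count occurrences of completions aborted because the submission queue was deleted
  grep -o 'ABORTED - SQ DELETION' nvmf_failover.log | wc -l
  # list each path transition reported by bdev_nvme_failover_trid, with a count per transition
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' nvmf_failover.log | sort | uniq -c
  # count how many reset cycles completed successfully
  grep -o 'Resetting controller successful' nvmf_failover.log | wc -l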
00:25:58.077 [2024-07-25 17:05:07.097571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097786] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.077 [2024-07-25 17:05:07.097912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.077 [2024-07-25 17:05:07.097922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.077 [2024-07-25 17:05:07.097929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.097939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.097946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.097956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.097964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.097973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.097981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.097990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.097997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66544 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.078 [2024-07-25 17:05:07.098366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.078 [2024-07-25 17:05:07.098372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 
17:05:07.098472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.079 [2024-07-25 17:05:07.098923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.079 [2024-07-25 17:05:07.098932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.098942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.098949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.098958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.098965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.098975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.098982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.098991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.098998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099166] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.080 [2024-07-25 17:05:07.099341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.080 [2024-07-25 17:05:07.099350] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67112 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.081 [2024-07-25 17:05:07.099543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.081 [2024-07-25 17:05:07.099678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.081 [2024-07-25 17:05:07.099709] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66312 len:8 PRP1 0x0 PRP2 0x0 00:25:58.081 [2024-07-25 17:05:07.099717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.081 [2024-07-25 17:05:07.099727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.082 [2024-07-25 17:05:07.099733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.082 [2024-07-25 17:05:07.099740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66320 len:8 PRP1 0x0 PRP2 0x0 00:25:58.082 [2024-07-25 17:05:07.099747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.082 [2024-07-25 17:05:07.099760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.082 [2024-07-25 17:05:07.099766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66328 len:8 PRP1 0x0 PRP2 0x0 00:25:58.082 [2024-07-25 17:05:07.099773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.082 [2024-07-25 17:05:07.099787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.082 [2024-07-25 17:05:07.099793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66336 len:8 PRP1 0x0 PRP2 0x0 00:25:58.082 [2024-07-25 17:05:07.099801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.082 [2024-07-25 17:05:07.099814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.082 [2024-07-25 17:05:07.099821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66344 len:8 PRP1 0x0 PRP2 0x0 00:25:58.082 [2024-07-25 17:05:07.099829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.082 [2024-07-25 17:05:07.099844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.082 [2024-07-25 17:05:07.099850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66352 len:8 PRP1 0x0 PRP2 0x0 00:25:58.082 [2024-07-25 17:05:07.099857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.082 [2024-07-25 17:05:07.099870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.082 [2024-07-25 17:05:07.099876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:66360 len:8 PRP1 0x0 PRP2 0x0 00:25:58.082 [2024-07-25 17:05:07.099883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099919] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd8d940 was disconnected and freed. reset controller. 00:25:58.082 [2024-07-25 17:05:07.099928] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:58.082 [2024-07-25 17:05:07.099948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.082 [2024-07-25 17:05:07.099956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.082 [2024-07-25 17:05:07.099973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.082 [2024-07-25 17:05:07.099988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.099997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.082 [2024-07-25 17:05:07.100005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:07.100013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.082 [2024-07-25 17:05:07.103514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.082 [2024-07-25 17:05:07.103540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5eef0 (9): Bad file descriptor 00:25:58.082 [2024-07-25 17:05:07.281381] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
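The same pattern repeats above for the next path: the remaining queued I/O on qpair 0xd8d940 is aborted, the path fails over from 10.0.0.2:4421 to 10.0.0.2:4422, the flush of the old TCP qpair fails with "Bad file descriptor" (errno 9, EBADF), and the reset again completes successfully. To pull this high-level timeline out of such a log (again only a sketch, with the same assumed file name), the key NOTICE/ERROR markers can be extracted in the order they occur:

  # print the per-cycle failover timeline: qpair disconnect, failover target, flush error, reset result
  grep -oE 'qpair 0x[0-9a-f]+ was disconnected and freed|Start failover from [0-9.:]+ to [0-9.:]+|Failed to flush tqpair=0x[0-9a-f]+ \(9\): Bad file descriptor|Resetting controller successful' nvmf_failover.log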
00:25:58.082 [2024-07-25 17:05:11.459047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:11.459101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:11.459120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:11.459142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:11.459159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:11.459175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:11.459192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.082 [2024-07-25 17:05:11.459216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.082 [2024-07-25 17:05:11.459223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 
17:05:11.459266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.083 [2024-07-25 17:05:11.459563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.083 [2024-07-25 17:05:11.459649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.083 [2024-07-25 17:05:11.459658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.084 [2024-07-25 17:05:11.459801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.459983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.459992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.460000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.460009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.460016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.460025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.460033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.460042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.460049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.084 [2024-07-25 17:05:11.460059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.084 [2024-07-25 17:05:11.460066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 
17:05:11.460116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.085 [2024-07-25 17:05:11.460363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.085 [2024-07-25 17:05:11.460370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.086 [2024-07-25 17:05:11.460669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.086 [2024-07-25 17:05:11.460699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115344 len:8 PRP1 0x0 PRP2 0x0 00:25:58.086 [2024-07-25 17:05:11.460707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.086 [2024-07-25 17:05:11.460723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.086 [2024-07-25 17:05:11.460729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115352 len:8 PRP1 0x0 PRP2 0x0 00:25:58.086 [2024-07-25 17:05:11.460737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.086 [2024-07-25 17:05:11.460751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.086 [2024-07-25 17:05:11.460757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115360 len:8 PRP1 0x0 PRP2 0x0 00:25:58.086 [2024-07-25 17:05:11.460764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.086 [2024-07-25 17:05:11.460778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.086 [2024-07-25 17:05:11.460785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115368 len:8 PRP1 0x0 PRP2 0x0 00:25:58.086 [2024-07-25 17:05:11.460792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.086 [2024-07-25 17:05:11.460799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.086 [2024-07-25 17:05:11.460805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:25:58.087 [2024-07-25 17:05:11.460811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115376 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.460818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.460825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.460831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.460837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115384 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.460846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.460854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.460859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.460866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115392 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.460873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.460881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.460886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.460892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115400 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.460899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.460906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.460913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.460919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115408 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.460926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.460933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.460939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.460945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115416 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.460952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.460960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.460966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.460975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115424 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.460982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.460990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.460995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115432 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114808 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114816 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114824 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114832 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461138] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114840 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114848 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.087 [2024-07-25 17:05:11.461192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114856 len:8 PRP1 0x0 PRP2 0x0 00:25:58.087 [2024-07-25 17:05:11.461203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.087 [2024-07-25 17:05:11.461211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.087 [2024-07-25 17:05:11.461218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.461224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115440 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.461231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.461239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.461245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.461252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115448 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.461259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.461269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.461275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.461281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115456 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.461289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.461297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.461303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.461309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115464 len:8 
PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.461317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115472 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115480 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115488 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115496 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115504 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115512 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 
17:05:11.472396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115520 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115528 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115536 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.088 [2024-07-25 17:05:11.472494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115544 len:8 PRP1 0x0 PRP2 0x0 00:25:58.088 [2024-07-25 17:05:11.472501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.088 [2024-07-25 17:05:11.472508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:58.088 [2024-07-25 17:05:11.472515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:58.089 [2024-07-25 17:05:11.472522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115552 len:8 PRP1 0x0 PRP2 0x0 00:25:58.089 [2024-07-25 17:05:11.472530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.089 [2024-07-25 17:05:11.472571] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd8f2a0 was disconnected and freed. reset controller. 
00:25:58.089 [2024-07-25 17:05:11.472581] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:58.089 [2024-07-25 17:05:11.472609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.089 [2024-07-25 17:05:11.472618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.089 [2024-07-25 17:05:11.472629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.089 [2024-07-25 17:05:11.472636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.089 [2024-07-25 17:05:11.472644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.089 [2024-07-25 17:05:11.472651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.089 [2024-07-25 17:05:11.472659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.089 [2024-07-25 17:05:11.472669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.089 [2024-07-25 17:05:11.472678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.089 [2024-07-25 17:05:11.472719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5eef0 (9): Bad file descriptor 00:25:58.089 [2024-07-25 17:05:11.476242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.089 [2024-07-25 17:05:11.504234] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
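The long run of ABORTED - SQ DELETION notices above is expected noise: each failover tears down the submission queues, so every in-flight command is completed with that status before the controller is reset on the next portal. The lines that actually tell the story are the bdev_nvme_failover_trid and _bdev_nvme_reset_ctrlr_complete notices. A minimal sketch for pulling just those events out of a captured log (try.txt here stands for the bdevperf output file that the harness cats further down):

# Show the failover hops and count the successful resets in a captured bdevperf log
grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' try.txt
grep -c 'Resetting controller successful' try.txt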
00:25:58.089 
00:25:58.089 Latency(us)
00:25:58.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.089 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:58.089 Verification LBA range: start 0x0 length 0x4000
00:25:58.089 NVMe0n1 : 15.01 11701.92 45.71 605.27 0.00 10372.51 1338.03 20425.39
00:25:58.089 ===================================================================================================================
00:25:58.089 Total : 11701.92 45.71 605.27 0.00 10372.51 1338.03 20425.39
00:25:58.089 Received shutdown signal, test time was about 15.000000 seconds
00:25:58.089 
00:25:58.089 Latency(us)
00:25:58.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.089 ===================================================================================================================
00:25:58.089 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1545727
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1545727 /var/tmp/bdevperf.sock
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1545727 ']'
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:58.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
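The trace above is the pass/fail gate for the first phase: the captured output is grepped for 'Resetting controller successful', exactly three hits are required (one per failover hop), and a second bdevperf is then started with -z so it sits idle as an RPC server on /var/tmp/bdevperf.sock until a test is requested. A hedged sketch of that gate and launch; LOG and BDEVPERF are placeholders for the paths used in this workspace, and the error handling is illustrative rather than the script's own:

LOG=try.txt        # placeholder for the captured bdevperf output
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
count=$(grep -c 'Resetting controller successful' "$LOG")
(( count != 3 )) && exit 1                      # three hops, three successful resets expected
# -z keeps bdevperf idle until a perform_tests RPC arrives on the socket
"$BDEVPERF" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock    # autotest_common.sh helper, as in the trace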
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:58.089 17:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:58.351 17:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:58.351 17:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:58.351 17:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:58.612 [2024-07-25 17:05:18.662867] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:58.612 17:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:58.612 [2024-07-25 17:05:18.835290] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:58.612 17:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:59.185 NVMe0n1
00:25:59.185 17:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:59.447 
00:25:59.447 17:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:59.708 
00:25:59.708 17:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:59.708 17:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:59.970 17:05:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:00.231 17:05:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:03.536 17:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:03.536 17:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:03.536 17:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1546829
00:26:03.536 17:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1546829
00:26:03.536 17:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:04.480 0
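Stripped of the xtrace noise, the setup traced above is a short sequence of rpc.py calls: publish two extra TCP listeners on the subsystem, attach the same controller once per portal so bdev_nvme has alternate paths, confirm it registered, drop the active path, and finally kick the idle bdevperf through its perform_tests RPC. A condensed sketch using the same commands as the trace; RPC, SOCK and NQN are shorthand introduced here, not variables from the original script:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

# One attach per portal gives the NVMe0 controller three trids to fail over across
for port in 4420 4421 4422; do
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
done

$RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0      # controller is registered
$RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
sleep 3

# bdevperf was started with -z, so the verify workload only runs once this RPC is sent
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests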
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.480 [2024-07-25 17:05:17.747634] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:26:04.480 [2024-07-25 17:05:17.747692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545727 ] 00:26:04.480 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.480 [2024-07-25 17:05:17.806197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.480 [2024-07-25 17:05:17.868159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.480 [2024-07-25 17:05:20.245174] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:04.480 [2024-07-25 17:05:20.245239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.480 [2024-07-25 17:05:20.245252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.480 [2024-07-25 17:05:20.245264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.480 [2024-07-25 17:05:20.245272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.480 [2024-07-25 17:05:20.245280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.480 [2024-07-25 17:05:20.245288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.480 [2024-07-25 17:05:20.245296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.480 [2024-07-25 17:05:20.245303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.480 [2024-07-25 17:05:20.245317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.480 [2024-07-25 17:05:20.245353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f80ef0 (9): Bad file descriptor 00:26:04.480 [2024-07-25 17:05:20.245369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.480 [2024-07-25 17:05:20.255170] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:04.480 Running I/O for 1 seconds... 
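The "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" message above is produced by attaching the same controller name to several portals of one subsystem and then removing the active path. A condensed sketch of that sequence as it appears in this trace, with the rpc.py path and socket shortened for readability (full paths as shown above):

  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # shortened; full path as in the trace
  # Attach the same bdev controller name to three portals of the same subsystem,
  # so bdev_nvme has alternate paths available for failover.
  for port in 4420 4421 4422; do
      $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # Removing the active path (4420) triggers the failover to 4421 logged above.
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1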
00:26:04.480 00:26:04.480 Latency(us) 00:26:04.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.480 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:04.480 Verification LBA range: start 0x0 length 0x4000 00:26:04.480 NVMe0n1 : 1.00 12328.37 48.16 0.00 0.00 10324.30 1358.51 19333.12 00:26:04.480 =================================================================================================================== 00:26:04.480 Total : 12328.37 48.16 0.00 0.00 10324.30 1358.51 19333.12 00:26:04.480 17:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:04.480 17:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:04.480 17:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:04.742 17:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:04.742 17:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:05.003 17:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:05.003 17:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1545727 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1545727 ']' 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1545727 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545727 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545727' 00:26:08.309 killing process with pid 1545727 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1545727 00:26:08.309 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1545727 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.571 rmmod nvme_tcp 00:26:08.571 rmmod nvme_fabrics 00:26:08.571 rmmod nvme_keyring 00:26:08.571 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1542021 ']' 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1542021 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1542021 ']' 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1542021 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1542021 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1542021' 00:26:08.833 killing process with pid 1542021 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1542021 00:26:08.833 17:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1542021 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.833 17:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.383 00:26:11.383 real 0m39.571s 00:26:11.383 user 2m2.133s 00:26:11.383 sys 0m8.124s 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:11.383 ************************************ 00:26:11.383 END TEST nvmf_failover 00:26:11.383 ************************************ 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.383 ************************************ 00:26:11.383 START TEST nvmf_host_discovery 00:26:11.383 ************************************ 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:11.383 * Looking for test storage... 00:26:11.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.383 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:11.384 17:05:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.384 17:05:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.537 17:05:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:19.537 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:19.537 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.537 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:19.538 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:19.538 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.538 17:05:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:26:19.538 00:26:19.538 --- 10.0.0.2 ping statistics --- 00:26:19.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.538 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.452 ms 00:26:19.538 00:26:19.538 --- 10.0.0.1 ping statistics --- 00:26:19.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.538 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1552076 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1552076 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1552076 ']' 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:19.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.538 17:05:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:19.538 [2024-07-25 17:05:38.681688] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:26:19.538 [2024-07-25 17:05:38.681747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.538 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.538 [2024-07-25 17:05:38.767526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.538 [2024-07-25 17:05:38.835293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.538 [2024-07-25 17:05:38.835336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.538 [2024-07-25 17:05:38.835344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.538 [2024-07-25 17:05:38.835350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.538 [2024-07-25 17:05:38.835356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.538 [2024-07-25 17:05:38.835375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.538 [2024-07-25 17:05:39.504953] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.538 [2024-07-25 17:05:39.517164] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.538 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.538 null0 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.539 null1 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1552260 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1552260 /tmp/host.sock 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1552260 ']' 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:19.539 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.539 17:05:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:19.539 [2024-07-25 17:05:39.600013] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:26:19.539 [2024-07-25 17:05:39.600081] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552260 ] 00:26:19.539 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.539 [2024-07-25 17:05:39.661820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.539 [2024-07-25 17:05:39.727284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:20.112 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.374 
17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:20.374 17:05:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:20.374 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.636 [2024-07-25 17:05:40.716330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.636 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:20.637 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.898 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:26:20.898 17:05:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:21.159 [2024-07-25 17:05:41.416552] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:21.159 [2024-07-25 17:05:41.416575] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:21.159 [2024-07-25 17:05:41.416591] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:21.420 
[2024-07-25 17:05:41.545008] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:21.420 [2024-07-25 17:05:41.610891] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:21.420 [2024-07-25 17:05:41.610915] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.714 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:21.988 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.988 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.989 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:21.989 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:21.989 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:21.989 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:21.989 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:21.989 17:05:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
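Annotation: the autotest_common.sh@914-920 markers that recur above are the generic polling harness this test leans on, and the discovery.sh@55-75 markers are the read-side helpers it evaluates. Reconstructed from the xtrace only (the authoritative definitions live in test/common/autotest_common.sh and test/nvmf/host/discovery.sh and may differ in detail; the HOST_SOCK name below is illustrative), they amount to roughly:

    # Polling harness: retry an arbitrary condition string up to ~10 times, 1 s apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0        # condition met, e.g. [[ "$(get_subsystem_names)" == "nvme0" ]]
            fi
            sleep 1             # the @920 "sleep 1" lines in the trace are this retry delay
        done
        return 1                # never reached in this (passing) run
    }

    HOST_SOCK=/tmp/host.sock    # RPC socket of the host-side SPDK app driven in this test

    # Controller names attached on the host side (expected here: "nvme0").
    get_subsystem_names() {
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Namespaces surfaced as bdevs (expected: "nvme0n1", later "nvme0n1 nvme0n2").
    get_bdev_list() {
        rpc_cmd -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # trsvcid of every path attached to controller $1 (expected: "4420", then "4420 4421").
    get_subsystem_paths() {
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # Notifications issued since the last check; notify_id advances by the count read.
    get_notification_count() {
        notification_count=$(rpc_cmd -s $HOST_SOCK notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

Typical invocations, taken verbatim from the trace, are waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' and is_notification_count_eq 1, the latter wrapping waitforcondition 'get_notification_count && ((notification_count == expected_count))'.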
00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.989 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:22.251 17:05:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.251 [2024-07-25 17:05:42.448800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:22.251 [2024-07-25 17:05:42.449014] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:22.251 [2024-07-25 17:05:42.449041] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.251 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:22.513 17:05:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:22.513 [2024-07-25 17:05:42.577803] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:22.513 17:05:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:22.513 [2024-07-25 17:05:42.681829] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:22.513 [2024-07-25 17:05:42.681851] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:22.513 [2024-07-25 17:05:42.681857] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:23.455 17:05:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:23.455 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.456 [2024-07-25 17:05:43.716634] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:23.456 [2024-07-25 17:05:43.716657] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.456 [2024-07-25 17:05:43.719918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.456 [2024-07-25 17:05:43.719937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.456 [2024-07-25 17:05:43.719946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.456 [2024-07-25 17:05:43.719954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.456 [2024-07-25 17:05:43.719962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.456 [2024-07-25 17:05:43.719969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.456 [2024-07-25 17:05:43.719977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.456 [2024-07-25 17:05:43.719984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.456 [2024-07-25 17:05:43.719992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.456 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.717 [2024-07-25 17:05:43.729930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.739970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.718 [2024-07-25 17:05:43.740559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.718 [2024-07-25 17:05:43.740597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23739d0 with addr=10.0.0.2, port=4420 00:26:23.718 [2024-07-25 17:05:43.740608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.718 [2024-07-25 17:05:43.740627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.740653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.718 [2024-07-25 17:05:43.740661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.718 [2024-07-25 17:05:43.740670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:23.718 [2024-07-25 17:05:43.740685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.718 [2024-07-25 17:05:43.750028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.718 [2024-07-25 17:05:43.750535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.718 [2024-07-25 17:05:43.750573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23739d0 with addr=10.0.0.2, port=4420 00:26:23.718 [2024-07-25 17:05:43.750585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.718 [2024-07-25 17:05:43.750604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.750616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.718 [2024-07-25 17:05:43.750623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.718 [2024-07-25 17:05:43.750631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:23.718 [2024-07-25 17:05:43.750647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.718 [2024-07-25 17:05:43.760085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.718 [2024-07-25 17:05:43.760560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.718 [2024-07-25 17:05:43.760576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23739d0 with addr=10.0.0.2, port=4420 00:26:23.718 [2024-07-25 17:05:43.760588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.718 [2024-07-25 17:05:43.760600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.760610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.718 [2024-07-25 17:05:43.760617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.718 [2024-07-25 17:05:43.760624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:23.718 [2024-07-25 17:05:43.760635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.718 [2024-07-25 17:05:43.770143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.718 [2024-07-25 17:05:43.770613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.718 [2024-07-25 17:05:43.770627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23739d0 with addr=10.0.0.2, port=4420 00:26:23.718 [2024-07-25 17:05:43.770635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.718 [2024-07-25 17:05:43.770647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.770657] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.718 [2024-07-25 17:05:43.770663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.718 [2024-07-25 17:05:43.770670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:23.718 [2024-07-25 17:05:43.770680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:23.718 [2024-07-25 17:05:43.780198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.718 [2024-07-25 17:05:43.780465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.718 [2024-07-25 17:05:43.780479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23739d0 with addr=10.0.0.2, port=4420 00:26:23.718 [2024-07-25 17:05:43.780486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.718 [2024-07-25 17:05:43.780497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.780508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.718 [2024-07-25 17:05:43.780514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.718 [2024-07-25 17:05:43.780522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:23.718 [2024-07-25 17:05:43.780532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.718 [2024-07-25 17:05:43.790261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.718 [2024-07-25 17:05:43.790743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.718 [2024-07-25 17:05:43.790756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23739d0 with addr=10.0.0.2, port=4420 00:26:23.718 [2024-07-25 17:05:43.790764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.718 [2024-07-25 17:05:43.790775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.790792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.718 [2024-07-25 17:05:43.790798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.718 [2024-07-25 17:05:43.790806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:23.718 [2024-07-25 17:05:43.790817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.718 [2024-07-25 17:05:43.800321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.718 [2024-07-25 17:05:43.800569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.718 [2024-07-25 17:05:43.800581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23739d0 with addr=10.0.0.2, port=4420 00:26:23.718 [2024-07-25 17:05:43.800588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23739d0 is same with the state(5) to be set 00:26:23.718 [2024-07-25 17:05:43.800599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23739d0 (9): Bad file descriptor 00:26:23.718 [2024-07-25 17:05:43.800610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.718 [2024-07-25 17:05:43.800616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.718 [2024-07-25 17:05:43.800623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
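Annotation: from here the script lets the stale 4420 path finish failing, confirms get_subsystem_paths nvme0 returns only 4421, then tears discovery down with bdev_nvme_stop_discovery -b nvme (discovery.sh@134) and checks that the controller list and the bdev list drain to empty while the notification count rises by 2. The remainder of this section restarts discovery with -w (wait for the initial attach) and runs two negative cases: re-issuing bdev_nvme_start_discovery against the same 8009 discovery endpoint, whether reusing the name nvme or using nvme_second, is rejected with JSON-RPC error -17 "File exists", and pointing nvme_second at port 8010, where nothing listens, fails at connect() with the same errno 111. A hedged scripts/rpc.py rendering of that tail (same wrapper caveat as above):

    # Stop discovery, then restart it and block until the initial attach completes (-w).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # A second discovery against the same endpoint is rejected: "code": -17, "message": "File exists".
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Discovery toward 8010 (no listener) fails at connect(), as the trailing records show.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000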
00:26:23.718 [2024-07-25 17:05:43.800633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.718 [2024-07-25 17:05:43.805843] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:23.718 [2024-07-25 17:05:43.805862] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:23.718 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- 
)) 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.719 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.980 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:23.980 17:05:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.980 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:23.980 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:23.980 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.980 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.980 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:23.980 17:05:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:23.980 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.981 17:05:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.368 [2024-07-25 17:05:45.209441] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:25.368 [2024-07-25 17:05:45.209459] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:25.368 [2024-07-25 17:05:45.209473] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.368 [2024-07-25 17:05:45.339885] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:25.368 [2024-07-25 17:05:45.609604] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:25.368 [2024-07-25 17:05:45.609633] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.368 request: 00:26:25.368 { 00:26:25.368 "name": "nvme", 00:26:25.368 "trtype": "tcp", 00:26:25.368 "traddr": "10.0.0.2", 00:26:25.368 "adrfam": "ipv4", 00:26:25.368 "trsvcid": "8009", 00:26:25.368 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:25.368 "wait_for_attach": true, 00:26:25.368 "method": "bdev_nvme_start_discovery", 00:26:25.368 "req_id": 1 00:26:25.368 } 00:26:25.368 Got JSON-RPC error response 00:26:25.368 response: 00:26:25.368 { 00:26:25.368 "code": -17, 00:26:25.368 "message": "File exists" 00:26:25.368 } 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.368 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:25.629 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 request: 00:26:25.630 { 00:26:25.630 "name": "nvme_second", 00:26:25.630 "trtype": "tcp", 00:26:25.630 "traddr": "10.0.0.2", 00:26:25.630 "adrfam": "ipv4", 00:26:25.630 "trsvcid": "8009", 00:26:25.630 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:25.630 "wait_for_attach": true, 00:26:25.630 "method": "bdev_nvme_start_discovery", 00:26:25.630 "req_id": 1 00:26:25.630 } 00:26:25.630 Got JSON-RPC error response 00:26:25.630 response: 00:26:25.630 { 00:26:25.630 "code": -17, 00:26:25.630 "message": "File exists" 00:26:25.630 } 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:25.630 17:05:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 17:05:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.018 [2024-07-25 17:05:46.877185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.018 [2024-07-25 17:05:46.877225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2371720 with addr=10.0.0.2, port=8010 00:26:27.018 [2024-07-25 17:05:46.877239] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:27.018 [2024-07-25 17:05:46.877247] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:27.018 [2024-07-25 17:05:46.877255] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:27.962 [2024-07-25 17:05:47.879791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.962 [2024-07-25 17:05:47.879814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b1f00 with addr=10.0.0.2, port=8010 00:26:27.962 [2024-07-25 17:05:47.879829] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:27.962 [2024-07-25 17:05:47.879837] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:26:27.962 [2024-07-25 17:05:47.879843] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:28.907 [2024-07-25 17:05:48.881646] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:28.907 request: 00:26:28.907 { 00:26:28.907 "name": "nvme_second", 00:26:28.907 "trtype": "tcp", 00:26:28.907 "traddr": "10.0.0.2", 00:26:28.907 "adrfam": "ipv4", 00:26:28.907 "trsvcid": "8010", 00:26:28.907 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:28.907 "wait_for_attach": false, 00:26:28.907 "attach_timeout_ms": 3000, 00:26:28.907 "method": "bdev_nvme_start_discovery", 00:26:28.907 "req_id": 1 00:26:28.907 } 00:26:28.907 Got JSON-RPC error response 00:26:28.907 response: 00:26:28.907 { 00:26:28.907 "code": -110, 00:26:28.907 "message": "Connection timed out" 00:26:28.907 } 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1552260 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:28.907 rmmod nvme_tcp 00:26:28.907 rmmod nvme_fabrics 00:26:28.907 rmmod nvme_keyring 00:26:28.907 17:05:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1552076 ']' 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1552076 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1552076 ']' 00:26:28.907 17:05:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1552076 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552076 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552076' 00:26:28.907 killing process with pid 1552076 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1552076 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1552076 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.907 17:05:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:31.457 00:26:31.457 real 0m20.044s 00:26:31.457 user 0m23.616s 00:26:31.457 sys 0m6.900s 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.457 ************************************ 00:26:31.457 END TEST nvmf_host_discovery 00:26:31.457 ************************************ 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
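The host-discovery checks traced above exercise two failure modes of the bdev_nvme_start_discovery RPC: asking for a second discovery service against an endpoint that is already being polled (10.0.0.2:8009) fails immediately with JSON-RPC error -17 ("File exists"), while a discovery attach toward 10.0.0.2:8010, where nothing is listening, keeps hitting connect() errno 111 until the -T 3000 ms budget expires and returns -110 ("Connection timed out"). A minimal sketch of the same two calls, assuming the harness helper rpc_cmd and the host-side RPC socket /tmp/host.sock used in this log:

    HOST_SOCK=/tmp/host.sock
    HOSTNQN=nqn.2021-12.io.spdk:test

    # Duplicate discovery registration on a port that already has one -> -17 "File exists"
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q "$HOSTNQN" -w \
        || echo "expected failure: File exists (-17)"

    # Discovery toward a port with no listener, bounded by a 3000 ms attach timeout -> -110
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q "$HOSTNQN" -T 3000 \
        || echo "expected failure: Connection timed out (-110)"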
00:26:31.457 17:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.457 ************************************ 00:26:31.457 START TEST nvmf_host_multipath_status 00:26:31.457 ************************************ 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:31.457 * Looking for test storage... 00:26:31.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
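The multipath-status stage above is driven by the run_test wrapper around test/nvmf/host/multipath_status.sh, which in turn sources test/nvmf/common.sh for the port and host-NQN defaults echoed in the trace. A rough sketch of launching the same stage by hand, assuming the workspace layout used in this job:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # run_test names and times a stage, then executes the script with its arguments.
    run_test nvmf_host_multipath_status \
        "$SPDK_DIR/test/nvmf/host/multipath_status.sh" --transport=tcp

    # Defaults picked up from test/nvmf/common.sh, as traced above:
    #   NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
    #   NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    #   NVME_HOSTID taken from the uuid portion of that NQN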
00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
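build_nvmf_app_args, traced above, appends the shared-memory id and tracepoint mask to the NVMF_APP array before nvmftestinit prepares the network devices. A condensed sketch of that assembly, under the assumption that NVMF_APP starts out holding the nvmf_tgt binary path (consistent with the launch line that appears later in this trace):

    NVMF_APP=("$SPDK_DIR/build/bin/nvmf_tgt")     # assumption: initial value, per the launch line below
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty here; only populated for no-hugepage runs

    # Effective launch later in this trace:
    #   ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3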
00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:31.457 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:31.458 17:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:39.602 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.602 
17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:39.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:39.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.602 17:05:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:39.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.602 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:39.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:26:39.603 00:26:39.603 --- 10.0.0.2 ping statistics --- 00:26:39.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.603 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:26:39.603 00:26:39.603 --- 10.0.0.1 ping statistics --- 00:26:39.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.603 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1558303 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1558303 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1558303 ']' 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
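nvmf_tcp_init, traced above, carves the two e810 ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with a one-packet ping in each direction confirming reachability before nvmf_tgt starts inside the namespace. The same setup, condensed from the commands in the trace:

    TARGET_NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"

    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side (root ns)
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (namespace)

    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic in

    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                    # target ns -> root ns

    modprobe nvme-tcp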
00:26:39.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:39.603 17:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.603 [2024-07-25 17:05:58.772331] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:26:39.603 [2024-07-25 17:05:58.772398] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.603 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.603 [2024-07-25 17:05:58.844238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:39.603 [2024-07-25 17:05:58.918886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.603 [2024-07-25 17:05:58.918925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.603 [2024-07-25 17:05:58.918934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.603 [2024-07-25 17:05:58.918940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.603 [2024-07-25 17:05:58.918946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.603 [2024-07-25 17:05:58.919090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.603 [2024-07-25 17:05:58.919091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1558303 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:39.603 [2024-07-25 17:05:59.735187] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.603 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:39.864 Malloc0 00:26:39.864 17:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:39.864 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.125 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.125 [2024-07-25 17:06:00.372761] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.125 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:40.386 [2024-07-25 17:06:00.529115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:40.386 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1558719 00:26:40.386 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1558719 /var/tmp/bdevperf.sock 00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1558719 ']' 00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:40.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
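With the target running inside the namespace, the trace above provisions it and then starts the bdevperf initiator that the multipath checks run against: one TCP transport, a 64 MB Malloc bdev (512-byte blocks) exposed as a namespace under nqn.2016-06.io.spdk:cnode1, and listeners on both 10.0.0.2:4420 and 10.0.0.2:4421. A condensed sketch of that sequence, assuming rpc_py points at scripts/rpc.py as it does in this job:

    rpc_py=$SPDK_DIR/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target-side provisioning (RPCs go to the nvmf_tgt default socket)
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns "$NQN" Malloc0
    $rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

    # Initiator side: bdevperf in its own process, controlled over /var/tmp/bdevperf.sock
    $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &

The two bdev_nvme_attach_controller calls that follow in the trace then add 4420 and, with -x multipath, 4421 as I/O paths of the same Nvme0 controller, which is what the later port_status/ANA checks inspect.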
00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:40.387 17:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:41.324 17:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:41.324 17:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:41.324 17:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:41.324 17:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:41.893 Nvme0n1 00:26:41.893 17:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:42.153 Nvme0n1 00:26:42.153 17:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:42.153 17:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:44.693 17:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:44.693 17:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:44.693 17:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:44.693 17:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.632 17:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.893 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:45.893 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.893 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.893 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:46.153 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.153 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:46.154 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.154 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:46.154 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.154 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:46.154 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.154 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:46.414 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.414 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:46.414 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.414 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:46.675 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.675 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:46.675 17:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.675 17:06:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.954 17:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:47.921 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:47.921 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:47.921 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.921 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:48.182 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.182 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:48.182 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.182 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.182 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.182 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.183 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.183 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.444 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.444 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.444 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.444 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.705 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.705 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.705 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.705 17:06:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.705 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.705 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.706 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.706 17:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.967 17:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.967 17:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:48.967 17:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:49.228 17:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:49.228 17:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:50.172 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:50.172 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:50.172 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.172 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.433 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.433 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:50.433 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.433 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.695 17:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.957 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.957 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.957 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.957 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.218 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.218 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.218 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.218 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.218 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.218 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:51.218 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:51.480 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:51.480 17:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:52.867 17:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:52.867 17:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:52.867 17:06:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.867 17:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:52.867 17:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.867 17:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:52.867 17:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.867 17:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.127 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:53.388 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.388 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:53.388 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.388 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:53.649 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.649 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:53.649 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:53.649 17:06:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.649 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.649 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:53.649 17:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:53.909 17:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:54.170 17:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.115 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:55.376 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.376 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:55.376 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.376 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.637 17:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.898 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.898 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:55.898 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.898 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.159 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.159 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:56.159 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:56.159 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:56.420 17:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:57.362 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:57.362 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:57.362 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.362 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:57.623 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.623 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:57.623 17:06:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.623 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:57.623 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.623 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:57.623 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.623 17:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:57.884 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.884 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:57.884 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.884 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.146 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.407 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.407 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:58.669 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:58.669 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:58.669 17:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:58.931 17:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:59.873 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:59.874 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:59.874 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.874 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.134 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:00.396 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.396 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:00.396 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.396 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:00.657 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.657 17:06:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:00.657 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.657 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:00.657 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.657 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:00.657 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.657 17:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:00.919 17:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.919 17:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:00.919 17:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:01.180 17:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:01.180 17:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.567 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:02.828 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.828 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:02.828 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:02.828 17:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.828 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.828 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:03.089 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.089 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:03.089 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.089 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:03.089 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.089 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:03.349 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.349 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:03.349 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:03.609 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:03.609 17:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
00:27:04.551 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:04.551 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:04.551 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.551 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.816 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.816 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:04.816 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.816 17:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.124 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.385 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.385 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:05.385 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.385 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.645 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.645 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:05.645 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.645 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:05.645 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.645 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:05.645 17:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:05.905 17:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:06.165 17:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:07.107 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:07.107 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:07.107 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.107 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.107 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.107 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:07.368 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.368 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:07.368 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.368 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.368 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.368 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.628 17:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:07.887 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.887 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:07.887 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.887 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1558719 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1558719 ']' 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1558719 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558719 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1558719' 00:27:08.150 killing process with pid 1558719 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1558719 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1558719 00:27:08.150 Connection closed with partial response: 00:27:08.150 00:27:08.150 00:27:08.150 
17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1558719 00:27:08.150 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:08.150 [2024-07-25 17:06:00.608441] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:27:08.150 [2024-07-25 17:06:00.608502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558719 ] 00:27:08.150 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.150 [2024-07-25 17:06:00.657505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.150 [2024-07-25 17:06:00.710692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.150 Running I/O for 90 seconds... 00:27:08.150 [2024-07-25 17:06:14.003412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.150 [2024-07-25 17:06:14.003443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.150 [2024-07-25 17:06:14.005106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.150 [2024-07-25 17:06:14.005128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.150 [2024-07-25 17:06:14.005148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.150 [2024-07-25 17:06:14.005168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.150 [2024-07-25 17:06:14.005187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.150 [2024-07-25 17:06:14.005211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
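The completions dumped from try.txt above carry status "(03/02)", i.e. NVMe Path Related Status (SCT 3h) / Asymmetric Access Inaccessible (SC 02h): I/O that was submitted on a path whose listener had just been moved to the inaccessible ANA state. When triaging a run by hand, a quick tally of such completions in the dumped bdevperf log can be taken with a one-liner like the following (a hypothetical helper command, not part of the test; the file path is the one cat'ed above):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt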
00:27:08.150 [2024-07-25 17:06:14.005225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.150 [2024-07-25 17:06:14.005230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.150 [2024-07-25 17:06:14.005250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.150 [2024-07-25 17:06:14.005269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:08.150 [2024-07-25 17:06:14.005283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
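Each port_status call in the trace runs one RPC and one jq filter per field, so a failing check only shows a single boolean. When debugging by hand it can be handy to dump all three flags for every path in one shot; a hypothetical ad-hoc variant of the same query, using the same RPC and the same JSON fields referenced by the traced jq filters:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
	| jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'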
00:27:08.151 [2024-07-25 17:06:14.005892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.005979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.005994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.006016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.006037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.006058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.006080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.006131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.006155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:08.151 [2024-07-25 17:06:14.006179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.151 [2024-07-25 17:06:14.006184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:14.006384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:14.006389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.164785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.164826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
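The reason both 4420 and 4421 report current=true in the checks from multipath_status.sh@121 onward is the policy switch traced at multipath_status.sh@116: with the multipath policy set to active_active, bdevperf spreads I/O across every accessible path instead of keeping a single active path. For reference, the same call as it can be issued by hand against the bdevperf RPC socket (command taken verbatim from the trace above):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active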
00:27:08.152 [2024-07-25 17:06:26.164928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.164988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.164993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.152 [2024-07-25 17:06:26.165522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.165537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.165553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.165568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.165584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.165601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.165616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.152 [2024-07-25 17:06:26.165631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:08.152 [2024-07-25 17:06:26.165641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.153 [2024-07-25 17:06:26.165646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:08.153 [2024-07-25 17:06:26.165756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.153 [2024-07-25 17:06:26.165763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:08.153 Received shutdown signal, test time was about 25.819425 seconds 00:27:08.153 00:27:08.153 Latency(us) 00:27:08.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.153 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:08.153 Verification LBA range: start 0x0 length 0x4000 00:27:08.153 Nvme0n1 : 25.82 11158.64 43.59 0.00 0.00 11451.30 395.95 3019898.88 00:27:08.153 =================================================================================================================== 00:27:08.153 Total : 11158.64 43.59 0.00 0.00 11451.30 395.95 3019898.88 00:27:08.153 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.414 rmmod nvme_tcp 00:27:08.414 rmmod nvme_fabrics 00:27:08.414 rmmod nvme_keyring 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # 
return 0 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1558303 ']' 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1558303 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1558303 ']' 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1558303 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.414 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558303 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1558303' 00:27:08.674 killing process with pid 1558303 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1558303 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1558303 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.674 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.675 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.675 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.675 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.675 17:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.220 17:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.220 00:27:11.220 real 0m39.614s 00:27:11.220 user 1m42.255s 00:27:11.220 sys 0m10.883s 00:27:11.220 17:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:11.220 17:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:11.220 ************************************ 00:27:11.220 END TEST nvmf_host_multipath_status 00:27:11.220 ************************************ 00:27:11.220 17:06:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:11.220 17:06:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:11.220 17:06:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:11.220 
17:06:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.220 ************************************ 00:27:11.220 START TEST nvmf_discovery_remove_ifc 00:27:11.220 ************************************ 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:11.220 * Looking for test storage... 00:27:11.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.220 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.221 17:06:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.814 17:06:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.814 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:17.815 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:17.815 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:17.815 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.815 
17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:17.815 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.815 17:06:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.815 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.815 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.815 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.815 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:18.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:18.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:27:18.077 00:27:18.077 --- 10.0.0.2 ping statistics --- 00:27:18.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.077 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:27:18.077 00:27:18.077 --- 10.0.0.1 ping statistics --- 00:27:18.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.077 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1569079 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1569079 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1569079 ']' 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
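For reference, the nvmftestinit trace above boils down to the following standalone sketch of the target/initiator split. It is a summary of the commands actually logged in this run, not the helper script itself; the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses are specific to this host and will differ elsewhere.

    # Target port moves into its own network namespace; the initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # root ns -> target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> root ns sanity check
    modprobe nvme-tcp                                                     # kernel initiator used by the host-side tests

After this, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is what the "Waiting for process to start up..." message above is waiting for.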
00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:18.077 17:06:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.077 [2024-07-25 17:06:38.300658] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:27:18.077 [2024-07-25 17:06:38.300724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.077 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.339 [2024-07-25 17:06:38.388159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.339 [2024-07-25 17:06:38.480468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.339 [2024-07-25 17:06:38.480528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.339 [2024-07-25 17:06:38.480536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.339 [2024-07-25 17:06:38.480543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.339 [2024-07-25 17:06:38.480555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.339 [2024-07-25 17:06:38.480581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.913 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.913 [2024-07-25 17:06:39.135693] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.913 [2024-07-25 17:06:39.143928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:18.913 null0 00:27:18.913 [2024-07-25 17:06:39.175883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1569230 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1569230 /tmp/host.sock 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1569230 ']' 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:19.175 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.175 17:06:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.175 [2024-07-25 17:06:39.253026] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:27:19.175 [2024-07-25 17:06:39.253090] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569230 ] 00:27:19.175 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.175 [2024-07-25 17:06:39.316571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.175 [2024-07-25 17:06:39.391376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.747 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:19.747 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:19.747 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:19.747 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:19.747 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.747 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.009 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.009 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:20.009 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.009 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.009 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.009 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:20.009 
17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.009 17:06:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.952 [2024-07-25 17:06:41.145416] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:20.952 [2024-07-25 17:06:41.145440] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:20.952 [2024-07-25 17:06:41.145458] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:21.213 [2024-07-25 17:06:41.234746] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:21.213 [2024-07-25 17:06:41.337713] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:21.213 [2024-07-25 17:06:41.337761] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:21.213 [2024-07-25 17:06:41.337784] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:21.213 [2024-07-25 17:06:41.337798] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:21.213 [2024-07-25 17:06:41.337818] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:21.213 [2024-07-25 17:06:41.343974] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x152b7f0 was disconnected and freed. delete nvme_qpair. 
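The discovery attach traced above is driven entirely over the host application's RPC socket. A minimal sketch of replaying the same sequence by hand follows; it assumes the CI workspace path to rpc.py and the /tmp/host.sock socket seen in this run, and simply repeats the option strings from the rpc_cmd traces rather than interpreting them.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # The host app was started with --wait-for-rpc, so options are set before framework init.
    "$rpc" -s /tmp/host.sock bdev_nvme_set_options -e 1
    "$rpc" -s /tmp/host.sock framework_start_init
    # Attach through the discovery service on 10.0.0.2:8009 and block until the NVM subsystem is connected.
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # The namespace of nqn.2016-06.io.spdk:cnode0 then shows up as bdev nvme0n1.
    "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

The short --ctrlr-loss-timeout-sec/--reconnect-delay-sec/--fast-io-fail-timeout-sec values are what let the later interface removal converge quickly once the connection starts failing.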
00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.213 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:21.214 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:21.214 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:21.475 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:21.475 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:21.475 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.475 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.475 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.475 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.476 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.476 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.476 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.476 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:21.476 17:06:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.419 17:06:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:22.419 17:06:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:23.363 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.363 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.363 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.363 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.363 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.363 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.363 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.624 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.624 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:23.624 17:06:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:24.564 17:06:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.506 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.506 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.506 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.506 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.506 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.506 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.506 17:06:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.506 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.767 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.767 17:06:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.711 [2024-07-25 17:06:46.778193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:26.711 [2024-07-25 17:06:46.778238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.711 [2024-07-25 17:06:46.778249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.711 [2024-07-25 17:06:46.778258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.711 [2024-07-25 17:06:46.778266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.711 [2024-07-25 17:06:46.778274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.711 [2024-07-25 17:06:46.778281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.711 [2024-07-25 17:06:46.778289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.711 [2024-07-25 17:06:46.778296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.711 [2024-07-25 17:06:46.778304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.711 [2024-07-25 17:06:46.778311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.711 [2024-07-25 17:06:46.778318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2060 is same with the state(5) to be set 00:27:26.711 [2024-07-25 17:06:46.788215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f2060 (9): Bad file descriptor 00:27:26.711 [2024-07-25 17:06:46.798253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:26.711 17:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.711 17:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.711 17:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.711 17:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.711 17:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.711 17:06:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.711 17:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.718 [2024-07-25 17:06:47.825242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:27.718 [2024-07-25 17:06:47.825283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f2060 with addr=10.0.0.2, port=4420 00:27:27.718 [2024-07-25 17:06:47.825297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2060 is same with the state(5) to be set 00:27:27.718 [2024-07-25 17:06:47.825323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f2060 (9): Bad file descriptor 00:27:27.718 [2024-07-25 17:06:47.825692] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:27.718 [2024-07-25 17:06:47.825717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:27.718 [2024-07-25 17:06:47.825727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:27.718 [2024-07-25 17:06:47.825735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:27.718 [2024-07-25 17:06:47.825751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:27.718 [2024-07-25 17:06:47.825759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:27.718 17:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.718 17:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.718 17:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.662 [2024-07-25 17:06:48.828136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:28.662 [2024-07-25 17:06:48.828156] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:28.662 [2024-07-25 17:06:48.828164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:28.662 [2024-07-25 17:06:48.828171] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:28.662 [2024-07-25 17:06:48.828183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.662 [2024-07-25 17:06:48.828206] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:28.662 [2024-07-25 17:06:48.828228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.662 [2024-07-25 17:06:48.828239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.662 [2024-07-25 17:06:48.828249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.662 [2024-07-25 17:06:48.828257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.662 [2024-07-25 17:06:48.828265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.662 [2024-07-25 17:06:48.828272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.662 [2024-07-25 17:06:48.828280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.662 [2024-07-25 17:06:48.828292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.662 [2024-07-25 17:06:48.828300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.662 [2024-07-25 17:06:48.828307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.662 [2024-07-25 17:06:48.828314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:28.662 [2024-07-25 17:06:48.828852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f14c0 (9): Bad file descriptor 00:27:28.662 [2024-07-25 17:06:48.829863] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:28.662 [2024-07-25 17:06:48.829874] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.662 17:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:28.924 17:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.868 17:06:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:29.868 17:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.812 [2024-07-25 17:06:50.890320] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:30.812 [2024-07-25 17:06:50.890342] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:30.812 [2024-07-25 17:06:50.890356] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.812 [2024-07-25 17:06:50.977629] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:30.812 [2024-07-25 17:06:51.080738] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:30.812 [2024-07-25 17:06:51.080775] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:30.812 [2024-07-25 17:06:51.080795] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:30.812 [2024-07-25 17:06:51.080809] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:30.812 [2024-07-25 17:06:51.080817] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:31.074 [2024-07-25 17:06:51.088374] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14f8e50 was disconnected and freed. delete nvme_qpair. 
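The xtrace above repeatedly exercises two small helpers from host/discovery_remove_ifc.sh: get_bdev_list (line tag @29), which dumps the bdev names known to the host-side SPDK app over its RPC socket, and the sleep-1 polling loop around it (tags @33/@34, driven by wait_for_bdev at @86). A minimal Bash reconstruction of that pattern, inferred only from the trace fragments here (the exact upstream helpers may differ), looks like this:

    get_bdev_list() {
        # Ask the host-side SPDK app (listening on /tmp/host.sock) for its bdevs
        # and flatten the names into one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # Poll once per second until the bdev list matches the expected value,
        # e.g. '' while waiting for nvme0n1 to disappear after the interface is
        # removed, or nvme1n1 once discovery re-attaches the subsystem.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

Here rpc_cmd is assumed to be the wrapper around scripts/rpc.py provided by the SPDK test common scripts, as used throughout the trace above.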
00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1569230 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1569230 ']' 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1569230 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569230 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569230' 00:27:31.074 killing process with pid 1569230 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1569230 00:27:31.074 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1569230 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:31.336 rmmod nvme_tcp 00:27:31.336 rmmod nvme_fabrics 00:27:31.336 rmmod nvme_keyring 00:27:31.336 17:06:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1569079 ']' 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1569079 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1569079 ']' 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1569079 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569079 00:27:31.336 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569079' 00:27:31.337 killing process with pid 1569079 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1569079 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1569079 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.337 17:06:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.884 00:27:33.884 real 0m22.667s 00:27:33.884 user 0m26.928s 00:27:33.884 sys 0m6.572s 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.884 ************************************ 00:27:33.884 END TEST nvmf_discovery_remove_ifc 00:27:33.884 ************************************ 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.884 ************************************ 00:27:33.884 START TEST nvmf_identify_kernel_target 00:27:33.884 ************************************ 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:33.884 * Looking for test storage... 00:27:33.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.884 17:06:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.884 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.885 17:06:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:42.031 
17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:42.031 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:42.031 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:42.031 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:42.031 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:42.031 17:07:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.031 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.031 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.031 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:42.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:42.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:27:42.031 00:27:42.031 --- 10.0.0.2 ping statistics --- 00:27:42.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.031 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:27:42.031 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:27:42.031 00:27:42.031 --- 10.0.0.1 ping statistics --- 00:27:42.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.032 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:42.032 17:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:44.581 Waiting for block devices as requested 00:27:44.581 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:44.581 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:44.581 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:44.581 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:44.581 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:44.581 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:44.842 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:44.842 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:44.842 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:45.103 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:45.103 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:45.364 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:45.364 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:45.364 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:45.364 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:45.626 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:45.626 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:45.887 No valid GPT data, bailing 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:45.887 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:46.151 00:27:46.151 Discovery Log Number of Records 2, Generation counter 2 00:27:46.151 =====Discovery Log Entry 0====== 00:27:46.151 trtype: tcp 00:27:46.151 adrfam: ipv4 00:27:46.151 subtype: current discovery subsystem 00:27:46.151 treq: not specified, sq flow control disable supported 00:27:46.151 portid: 1 00:27:46.151 trsvcid: 4420 00:27:46.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:46.151 traddr: 10.0.0.1 00:27:46.151 eflags: none 00:27:46.151 sectype: none 00:27:46.151 =====Discovery Log Entry 1====== 00:27:46.151 trtype: tcp 00:27:46.151 adrfam: ipv4 00:27:46.151 subtype: nvme subsystem 00:27:46.151 treq: not specified, sq flow control disable supported 00:27:46.151 portid: 1 00:27:46.151 trsvcid: 4420 00:27:46.151 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:46.151 traddr: 10.0.0.1 00:27:46.151 eflags: none 00:27:46.151 sectype: none 00:27:46.151 17:07:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:46.151 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:46.151 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.151 ===================================================== 00:27:46.151 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:46.151 ===================================================== 00:27:46.151 Controller Capabilities/Features 00:27:46.151 ================================ 00:27:46.151 Vendor ID: 0000 00:27:46.151 Subsystem Vendor ID: 0000 00:27:46.151 Serial Number: 9bfe57e3a0a29edde25e 00:27:46.151 Model Number: Linux 00:27:46.151 Firmware Version: 6.7.0-68 00:27:46.151 Recommended Arb Burst: 0 00:27:46.151 IEEE OUI Identifier: 00 00 00 00:27:46.151 Multi-path I/O 00:27:46.151 May have multiple subsystem ports: No 00:27:46.151 May have multiple controllers: No 00:27:46.151 Associated with SR-IOV VF: No 00:27:46.151 Max Data Transfer Size: Unlimited 00:27:46.151 Max Number of Namespaces: 0 00:27:46.151 Max Number of I/O Queues: 1024 00:27:46.151 NVMe Specification Version (VS): 1.3 00:27:46.151 NVMe Specification Version (Identify): 1.3 00:27:46.151 Maximum Queue Entries: 1024 00:27:46.151 Contiguous Queues Required: No 00:27:46.151 Arbitration Mechanisms Supported 00:27:46.151 Weighted Round Robin: Not Supported 00:27:46.151 Vendor Specific: Not Supported 00:27:46.151 Reset Timeout: 7500 ms 00:27:46.151 Doorbell Stride: 4 bytes 00:27:46.151 NVM Subsystem Reset: Not Supported 00:27:46.151 Command Sets Supported 00:27:46.151 NVM Command Set: Supported 00:27:46.151 Boot Partition: Not Supported 00:27:46.151 Memory Page Size Minimum: 4096 bytes 00:27:46.151 Memory Page Size Maximum: 4096 bytes 00:27:46.151 Persistent Memory Region: Not Supported 00:27:46.151 Optional Asynchronous Events Supported 00:27:46.151 Namespace Attribute Notices: Not Supported 00:27:46.151 Firmware Activation Notices: Not Supported 00:27:46.151 ANA Change Notices: Not Supported 00:27:46.151 PLE Aggregate Log Change Notices: Not Supported 00:27:46.151 LBA Status Info Alert Notices: Not Supported 00:27:46.151 EGE Aggregate Log Change Notices: Not Supported 00:27:46.151 Normal NVM Subsystem Shutdown event: Not Supported 00:27:46.151 Zone Descriptor Change Notices: Not Supported 00:27:46.151 Discovery Log Change Notices: Supported 00:27:46.151 Controller Attributes 00:27:46.151 128-bit Host Identifier: Not Supported 00:27:46.151 Non-Operational Permissive Mode: Not Supported 00:27:46.151 NVM Sets: Not Supported 00:27:46.151 Read Recovery Levels: Not Supported 00:27:46.151 Endurance Groups: Not Supported 00:27:46.151 Predictable Latency Mode: Not Supported 00:27:46.151 Traffic Based Keep ALive: Not Supported 00:27:46.151 Namespace Granularity: Not Supported 00:27:46.151 SQ Associations: Not Supported 00:27:46.151 UUID List: Not Supported 00:27:46.151 Multi-Domain Subsystem: Not Supported 00:27:46.151 Fixed Capacity Management: Not Supported 00:27:46.151 Variable Capacity Management: Not Supported 00:27:46.151 Delete Endurance Group: Not Supported 00:27:46.151 Delete NVM Set: Not Supported 00:27:46.151 Extended LBA Formats Supported: Not Supported 00:27:46.151 Flexible Data Placement Supported: Not Supported 00:27:46.151 00:27:46.151 Controller Memory Buffer Support 00:27:46.151 ================================ 00:27:46.151 Supported: No 
00:27:46.151 00:27:46.151 Persistent Memory Region Support 00:27:46.151 ================================ 00:27:46.151 Supported: No 00:27:46.151 00:27:46.151 Admin Command Set Attributes 00:27:46.151 ============================ 00:27:46.151 Security Send/Receive: Not Supported 00:27:46.151 Format NVM: Not Supported 00:27:46.151 Firmware Activate/Download: Not Supported 00:27:46.151 Namespace Management: Not Supported 00:27:46.151 Device Self-Test: Not Supported 00:27:46.151 Directives: Not Supported 00:27:46.151 NVMe-MI: Not Supported 00:27:46.151 Virtualization Management: Not Supported 00:27:46.151 Doorbell Buffer Config: Not Supported 00:27:46.151 Get LBA Status Capability: Not Supported 00:27:46.151 Command & Feature Lockdown Capability: Not Supported 00:27:46.151 Abort Command Limit: 1 00:27:46.151 Async Event Request Limit: 1 00:27:46.151 Number of Firmware Slots: N/A 00:27:46.151 Firmware Slot 1 Read-Only: N/A 00:27:46.151 Firmware Activation Without Reset: N/A 00:27:46.151 Multiple Update Detection Support: N/A 00:27:46.151 Firmware Update Granularity: No Information Provided 00:27:46.151 Per-Namespace SMART Log: No 00:27:46.151 Asymmetric Namespace Access Log Page: Not Supported 00:27:46.151 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:46.151 Command Effects Log Page: Not Supported 00:27:46.151 Get Log Page Extended Data: Supported 00:27:46.151 Telemetry Log Pages: Not Supported 00:27:46.151 Persistent Event Log Pages: Not Supported 00:27:46.151 Supported Log Pages Log Page: May Support 00:27:46.151 Commands Supported & Effects Log Page: Not Supported 00:27:46.151 Feature Identifiers & Effects Log Page:May Support 00:27:46.151 NVMe-MI Commands & Effects Log Page: May Support 00:27:46.151 Data Area 4 for Telemetry Log: Not Supported 00:27:46.151 Error Log Page Entries Supported: 1 00:27:46.151 Keep Alive: Not Supported 00:27:46.151 00:27:46.151 NVM Command Set Attributes 00:27:46.151 ========================== 00:27:46.151 Submission Queue Entry Size 00:27:46.151 Max: 1 00:27:46.151 Min: 1 00:27:46.151 Completion Queue Entry Size 00:27:46.151 Max: 1 00:27:46.151 Min: 1 00:27:46.151 Number of Namespaces: 0 00:27:46.151 Compare Command: Not Supported 00:27:46.151 Write Uncorrectable Command: Not Supported 00:27:46.151 Dataset Management Command: Not Supported 00:27:46.151 Write Zeroes Command: Not Supported 00:27:46.151 Set Features Save Field: Not Supported 00:27:46.151 Reservations: Not Supported 00:27:46.151 Timestamp: Not Supported 00:27:46.151 Copy: Not Supported 00:27:46.151 Volatile Write Cache: Not Present 00:27:46.151 Atomic Write Unit (Normal): 1 00:27:46.151 Atomic Write Unit (PFail): 1 00:27:46.151 Atomic Compare & Write Unit: 1 00:27:46.151 Fused Compare & Write: Not Supported 00:27:46.151 Scatter-Gather List 00:27:46.151 SGL Command Set: Supported 00:27:46.151 SGL Keyed: Not Supported 00:27:46.151 SGL Bit Bucket Descriptor: Not Supported 00:27:46.151 SGL Metadata Pointer: Not Supported 00:27:46.151 Oversized SGL: Not Supported 00:27:46.151 SGL Metadata Address: Not Supported 00:27:46.151 SGL Offset: Supported 00:27:46.151 Transport SGL Data Block: Not Supported 00:27:46.151 Replay Protected Memory Block: Not Supported 00:27:46.151 00:27:46.151 Firmware Slot Information 00:27:46.151 ========================= 00:27:46.151 Active slot: 0 00:27:46.151 00:27:46.151 00:27:46.151 Error Log 00:27:46.151 ========= 00:27:46.151 00:27:46.151 Active Namespaces 00:27:46.152 ================= 00:27:46.152 Discovery Log Page 00:27:46.152 ================== 00:27:46.152 
Generation Counter: 2 00:27:46.152 Number of Records: 2 00:27:46.152 Record Format: 0 00:27:46.152 00:27:46.152 Discovery Log Entry 0 00:27:46.152 ---------------------- 00:27:46.152 Transport Type: 3 (TCP) 00:27:46.152 Address Family: 1 (IPv4) 00:27:46.152 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:46.152 Entry Flags: 00:27:46.152 Duplicate Returned Information: 0 00:27:46.152 Explicit Persistent Connection Support for Discovery: 0 00:27:46.152 Transport Requirements: 00:27:46.152 Secure Channel: Not Specified 00:27:46.152 Port ID: 1 (0x0001) 00:27:46.152 Controller ID: 65535 (0xffff) 00:27:46.152 Admin Max SQ Size: 32 00:27:46.152 Transport Service Identifier: 4420 00:27:46.152 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:46.152 Transport Address: 10.0.0.1 00:27:46.152 Discovery Log Entry 1 00:27:46.152 ---------------------- 00:27:46.152 Transport Type: 3 (TCP) 00:27:46.152 Address Family: 1 (IPv4) 00:27:46.152 Subsystem Type: 2 (NVM Subsystem) 00:27:46.152 Entry Flags: 00:27:46.152 Duplicate Returned Information: 0 00:27:46.152 Explicit Persistent Connection Support for Discovery: 0 00:27:46.152 Transport Requirements: 00:27:46.152 Secure Channel: Not Specified 00:27:46.152 Port ID: 1 (0x0001) 00:27:46.152 Controller ID: 65535 (0xffff) 00:27:46.152 Admin Max SQ Size: 32 00:27:46.152 Transport Service Identifier: 4420 00:27:46.152 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:46.152 Transport Address: 10.0.0.1 00:27:46.152 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:46.152 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.152 get_feature(0x01) failed 00:27:46.152 get_feature(0x02) failed 00:27:46.152 get_feature(0x04) failed 00:27:46.152 ===================================================== 00:27:46.152 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:46.152 ===================================================== 00:27:46.152 Controller Capabilities/Features 00:27:46.152 ================================ 00:27:46.152 Vendor ID: 0000 00:27:46.152 Subsystem Vendor ID: 0000 00:27:46.152 Serial Number: 953470e174c911b5dac6 00:27:46.152 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:46.152 Firmware Version: 6.7.0-68 00:27:46.152 Recommended Arb Burst: 6 00:27:46.152 IEEE OUI Identifier: 00 00 00 00:27:46.152 Multi-path I/O 00:27:46.152 May have multiple subsystem ports: Yes 00:27:46.152 May have multiple controllers: Yes 00:27:46.152 Associated with SR-IOV VF: No 00:27:46.152 Max Data Transfer Size: Unlimited 00:27:46.152 Max Number of Namespaces: 1024 00:27:46.152 Max Number of I/O Queues: 128 00:27:46.152 NVMe Specification Version (VS): 1.3 00:27:46.152 NVMe Specification Version (Identify): 1.3 00:27:46.152 Maximum Queue Entries: 1024 00:27:46.152 Contiguous Queues Required: No 00:27:46.152 Arbitration Mechanisms Supported 00:27:46.152 Weighted Round Robin: Not Supported 00:27:46.152 Vendor Specific: Not Supported 00:27:46.152 Reset Timeout: 7500 ms 00:27:46.152 Doorbell Stride: 4 bytes 00:27:46.152 NVM Subsystem Reset: Not Supported 00:27:46.152 Command Sets Supported 00:27:46.152 NVM Command Set: Supported 00:27:46.152 Boot Partition: Not Supported 00:27:46.152 Memory Page Size Minimum: 4096 bytes 00:27:46.152 Memory Page Size Maximum: 4096 bytes 00:27:46.152 
Persistent Memory Region: Not Supported 00:27:46.152 Optional Asynchronous Events Supported 00:27:46.152 Namespace Attribute Notices: Supported 00:27:46.152 Firmware Activation Notices: Not Supported 00:27:46.152 ANA Change Notices: Supported 00:27:46.152 PLE Aggregate Log Change Notices: Not Supported 00:27:46.152 LBA Status Info Alert Notices: Not Supported 00:27:46.152 EGE Aggregate Log Change Notices: Not Supported 00:27:46.152 Normal NVM Subsystem Shutdown event: Not Supported 00:27:46.152 Zone Descriptor Change Notices: Not Supported 00:27:46.152 Discovery Log Change Notices: Not Supported 00:27:46.152 Controller Attributes 00:27:46.152 128-bit Host Identifier: Supported 00:27:46.152 Non-Operational Permissive Mode: Not Supported 00:27:46.152 NVM Sets: Not Supported 00:27:46.152 Read Recovery Levels: Not Supported 00:27:46.152 Endurance Groups: Not Supported 00:27:46.152 Predictable Latency Mode: Not Supported 00:27:46.152 Traffic Based Keep ALive: Supported 00:27:46.152 Namespace Granularity: Not Supported 00:27:46.152 SQ Associations: Not Supported 00:27:46.152 UUID List: Not Supported 00:27:46.152 Multi-Domain Subsystem: Not Supported 00:27:46.152 Fixed Capacity Management: Not Supported 00:27:46.152 Variable Capacity Management: Not Supported 00:27:46.152 Delete Endurance Group: Not Supported 00:27:46.152 Delete NVM Set: Not Supported 00:27:46.152 Extended LBA Formats Supported: Not Supported 00:27:46.152 Flexible Data Placement Supported: Not Supported 00:27:46.152 00:27:46.152 Controller Memory Buffer Support 00:27:46.152 ================================ 00:27:46.152 Supported: No 00:27:46.152 00:27:46.152 Persistent Memory Region Support 00:27:46.152 ================================ 00:27:46.152 Supported: No 00:27:46.152 00:27:46.152 Admin Command Set Attributes 00:27:46.152 ============================ 00:27:46.152 Security Send/Receive: Not Supported 00:27:46.152 Format NVM: Not Supported 00:27:46.152 Firmware Activate/Download: Not Supported 00:27:46.152 Namespace Management: Not Supported 00:27:46.152 Device Self-Test: Not Supported 00:27:46.152 Directives: Not Supported 00:27:46.152 NVMe-MI: Not Supported 00:27:46.152 Virtualization Management: Not Supported 00:27:46.152 Doorbell Buffer Config: Not Supported 00:27:46.153 Get LBA Status Capability: Not Supported 00:27:46.153 Command & Feature Lockdown Capability: Not Supported 00:27:46.153 Abort Command Limit: 4 00:27:46.153 Async Event Request Limit: 4 00:27:46.153 Number of Firmware Slots: N/A 00:27:46.153 Firmware Slot 1 Read-Only: N/A 00:27:46.153 Firmware Activation Without Reset: N/A 00:27:46.153 Multiple Update Detection Support: N/A 00:27:46.153 Firmware Update Granularity: No Information Provided 00:27:46.153 Per-Namespace SMART Log: Yes 00:27:46.153 Asymmetric Namespace Access Log Page: Supported 00:27:46.153 ANA Transition Time : 10 sec 00:27:46.153 00:27:46.153 Asymmetric Namespace Access Capabilities 00:27:46.153 ANA Optimized State : Supported 00:27:46.153 ANA Non-Optimized State : Supported 00:27:46.153 ANA Inaccessible State : Supported 00:27:46.153 ANA Persistent Loss State : Supported 00:27:46.153 ANA Change State : Supported 00:27:46.153 ANAGRPID is not changed : No 00:27:46.153 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:46.153 00:27:46.153 ANA Group Identifier Maximum : 128 00:27:46.153 Number of ANA Group Identifiers : 128 00:27:46.153 Max Number of Allowed Namespaces : 1024 00:27:46.153 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:46.153 Command Effects Log Page: Supported 
00:27:46.153 Get Log Page Extended Data: Supported 00:27:46.153 Telemetry Log Pages: Not Supported 00:27:46.153 Persistent Event Log Pages: Not Supported 00:27:46.153 Supported Log Pages Log Page: May Support 00:27:46.153 Commands Supported & Effects Log Page: Not Supported 00:27:46.153 Feature Identifiers & Effects Log Page:May Support 00:27:46.153 NVMe-MI Commands & Effects Log Page: May Support 00:27:46.153 Data Area 4 for Telemetry Log: Not Supported 00:27:46.153 Error Log Page Entries Supported: 128 00:27:46.153 Keep Alive: Supported 00:27:46.153 Keep Alive Granularity: 1000 ms 00:27:46.153 00:27:46.153 NVM Command Set Attributes 00:27:46.153 ========================== 00:27:46.153 Submission Queue Entry Size 00:27:46.153 Max: 64 00:27:46.153 Min: 64 00:27:46.153 Completion Queue Entry Size 00:27:46.153 Max: 16 00:27:46.153 Min: 16 00:27:46.153 Number of Namespaces: 1024 00:27:46.153 Compare Command: Not Supported 00:27:46.153 Write Uncorrectable Command: Not Supported 00:27:46.153 Dataset Management Command: Supported 00:27:46.153 Write Zeroes Command: Supported 00:27:46.153 Set Features Save Field: Not Supported 00:27:46.153 Reservations: Not Supported 00:27:46.153 Timestamp: Not Supported 00:27:46.153 Copy: Not Supported 00:27:46.153 Volatile Write Cache: Present 00:27:46.153 Atomic Write Unit (Normal): 1 00:27:46.153 Atomic Write Unit (PFail): 1 00:27:46.153 Atomic Compare & Write Unit: 1 00:27:46.153 Fused Compare & Write: Not Supported 00:27:46.153 Scatter-Gather List 00:27:46.153 SGL Command Set: Supported 00:27:46.153 SGL Keyed: Not Supported 00:27:46.153 SGL Bit Bucket Descriptor: Not Supported 00:27:46.153 SGL Metadata Pointer: Not Supported 00:27:46.153 Oversized SGL: Not Supported 00:27:46.153 SGL Metadata Address: Not Supported 00:27:46.153 SGL Offset: Supported 00:27:46.153 Transport SGL Data Block: Not Supported 00:27:46.153 Replay Protected Memory Block: Not Supported 00:27:46.153 00:27:46.153 Firmware Slot Information 00:27:46.153 ========================= 00:27:46.153 Active slot: 0 00:27:46.153 00:27:46.153 Asymmetric Namespace Access 00:27:46.153 =========================== 00:27:46.153 Change Count : 0 00:27:46.153 Number of ANA Group Descriptors : 1 00:27:46.153 ANA Group Descriptor : 0 00:27:46.153 ANA Group ID : 1 00:27:46.153 Number of NSID Values : 1 00:27:46.153 Change Count : 0 00:27:46.153 ANA State : 1 00:27:46.153 Namespace Identifier : 1 00:27:46.153 00:27:46.153 Commands Supported and Effects 00:27:46.153 ============================== 00:27:46.153 Admin Commands 00:27:46.153 -------------- 00:27:46.153 Get Log Page (02h): Supported 00:27:46.153 Identify (06h): Supported 00:27:46.153 Abort (08h): Supported 00:27:46.153 Set Features (09h): Supported 00:27:46.153 Get Features (0Ah): Supported 00:27:46.153 Asynchronous Event Request (0Ch): Supported 00:27:46.153 Keep Alive (18h): Supported 00:27:46.153 I/O Commands 00:27:46.153 ------------ 00:27:46.153 Flush (00h): Supported 00:27:46.153 Write (01h): Supported LBA-Change 00:27:46.153 Read (02h): Supported 00:27:46.153 Write Zeroes (08h): Supported LBA-Change 00:27:46.153 Dataset Management (09h): Supported 00:27:46.153 00:27:46.153 Error Log 00:27:46.153 ========= 00:27:46.153 Entry: 0 00:27:46.153 Error Count: 0x3 00:27:46.153 Submission Queue Id: 0x0 00:27:46.153 Command Id: 0x5 00:27:46.153 Phase Bit: 0 00:27:46.153 Status Code: 0x2 00:27:46.153 Status Code Type: 0x0 00:27:46.153 Do Not Retry: 1 00:27:46.153 Error Location: 0x28 00:27:46.153 LBA: 0x0 00:27:46.153 Namespace: 0x0 00:27:46.153 Vendor Log 
Page: 0x0 00:27:46.153 ----------- 00:27:46.153 Entry: 1 00:27:46.153 Error Count: 0x2 00:27:46.153 Submission Queue Id: 0x0 00:27:46.153 Command Id: 0x5 00:27:46.153 Phase Bit: 0 00:27:46.153 Status Code: 0x2 00:27:46.153 Status Code Type: 0x0 00:27:46.153 Do Not Retry: 1 00:27:46.153 Error Location: 0x28 00:27:46.153 LBA: 0x0 00:27:46.153 Namespace: 0x0 00:27:46.153 Vendor Log Page: 0x0 00:27:46.153 ----------- 00:27:46.153 Entry: 2 00:27:46.153 Error Count: 0x1 00:27:46.153 Submission Queue Id: 0x0 00:27:46.153 Command Id: 0x4 00:27:46.153 Phase Bit: 0 00:27:46.153 Status Code: 0x2 00:27:46.153 Status Code Type: 0x0 00:27:46.153 Do Not Retry: 1 00:27:46.153 Error Location: 0x28 00:27:46.153 LBA: 0x0 00:27:46.153 Namespace: 0x0 00:27:46.153 Vendor Log Page: 0x0 00:27:46.153 00:27:46.153 Number of Queues 00:27:46.153 ================ 00:27:46.153 Number of I/O Submission Queues: 128 00:27:46.153 Number of I/O Completion Queues: 128 00:27:46.153 00:27:46.153 ZNS Specific Controller Data 00:27:46.154 ============================ 00:27:46.154 Zone Append Size Limit: 0 00:27:46.154 00:27:46.154 00:27:46.154 Active Namespaces 00:27:46.154 ================= 00:27:46.154 get_feature(0x05) failed 00:27:46.154 Namespace ID:1 00:27:46.154 Command Set Identifier: NVM (00h) 00:27:46.154 Deallocate: Supported 00:27:46.154 Deallocated/Unwritten Error: Not Supported 00:27:46.154 Deallocated Read Value: Unknown 00:27:46.154 Deallocate in Write Zeroes: Not Supported 00:27:46.154 Deallocated Guard Field: 0xFFFF 00:27:46.154 Flush: Supported 00:27:46.154 Reservation: Not Supported 00:27:46.154 Namespace Sharing Capabilities: Multiple Controllers 00:27:46.154 Size (in LBAs): 3750748848 (1788GiB) 00:27:46.154 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:46.154 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:46.154 UUID: 15e05933-e017-425a-aedb-db164a5ae0bb 00:27:46.154 Thin Provisioning: Not Supported 00:27:46.154 Per-NS Atomic Units: Yes 00:27:46.154 Atomic Write Unit (Normal): 8 00:27:46.154 Atomic Write Unit (PFail): 8 00:27:46.154 Preferred Write Granularity: 8 00:27:46.154 Atomic Compare & Write Unit: 8 00:27:46.154 Atomic Boundary Size (Normal): 0 00:27:46.154 Atomic Boundary Size (PFail): 0 00:27:46.154 Atomic Boundary Offset: 0 00:27:46.154 NGUID/EUI64 Never Reused: No 00:27:46.154 ANA group ID: 1 00:27:46.154 Namespace Write Protected: No 00:27:46.154 Number of LBA Formats: 1 00:27:46.154 Current LBA Format: LBA Format #00 00:27:46.154 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:46.154 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.154 rmmod nvme_tcp 00:27:46.154 rmmod nvme_fabrics 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.154 17:07:06 
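The two blocks of identify output above come from spdk_nvme_identify pointed first at the discovery subsystem and then at the kernel-exported NVM subsystem. A condensed sketch of the invocations, assuming the binary built under build/bin and the 10.0.0.1:4420 kernel listener configured earlier in the test (the discovery-NQN form is inferred from the first block of output; only the testnqn form is quoted verbatim in the log):
SPDK_IDENTIFY=./build/bin/spdk_nvme_identify
# Discovery subsystem: returns the Discovery Log Page with one entry per exported subsystem
$SPDK_IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
# Kernel NVM subsystem: returns the full controller data, ANA log and error log shown above
$SPDK_IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'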
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.154 17:07:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:48.703 17:07:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:51.254 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 
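The clean_kernel_target steps above map one-to-one onto the kernel nvmet configfs layout. As a standalone sketch (the redirect target of the bare 'echo 0' is hidden by xtrace; writing 0 to the namespace enable attribute is assumed here):
nqn=nqn.2016-06.io.spdk:testnqn
cfs=/sys/kernel/config/nvmet
echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # disable the namespace (assumed target of the echo)
rm -f "$cfs/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
rmdir "$cfs/subsystems/$nqn/namespaces/1"
rmdir "$cfs/ports/1"
rmdir "$cfs/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                           # finally unload the kernel target modules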
0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:51.254 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:51.515 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:51.515 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:51.777 00:27:51.777 real 0m18.134s 00:27:51.777 user 0m4.647s 00:27:51.777 sys 0m10.328s 00:27:51.777 17:07:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:51.777 17:07:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.777 ************************************ 00:27:51.777 END TEST nvmf_identify_kernel_target 00:27:51.777 ************************************ 00:27:51.777 17:07:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:51.777 17:07:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:51.777 17:07:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.777 17:07:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.777 ************************************ 00:27:51.777 START TEST nvmf_auth_host 00:27:51.777 ************************************ 00:27:51.777 17:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:51.777 * Looking for test storage... 00:27:52.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
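run_test above is just the CI harness wrapper that times the stage and captures its xtrace; assuming a built and configured SPDK checkout with the same test hardware, the stage corresponds roughly to running the script directly with root privileges (needed for the namespace and module operations that follow):
cd /path/to/spdk    # the CI run uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/host/auth.sh --transport=tcp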
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.039 17:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.685 17:07:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:58.685 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.685 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:58.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:58.686 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:58.686 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.686 17:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.946 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.946 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.946 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.946 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.946 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.946 17:07:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:27:59.207 00:27:59.207 --- 10.0.0.2 ping statistics --- 00:27:59.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.207 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:59.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:27:59.207 00:27:59.207 --- 10.0.0.1 ping statistics --- 00:27:59.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.207 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1583253 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1583253 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1583253 ']' 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
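nvmf_tcp_init and nvmfappstart, condensed: the target-side port of the E810 NIC is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, reachability is checked in both directions, and nvmf_tgt is started inside the namespace with the nvme_auth trace flag. A minimal sketch using the interface and address names from the log (paths shortened; the real run waits for the RPC socket via waitforlisten rather than backgrounding blindly):
target_if=cvl_0_0; initiator_if=cvl_0_1; ns=cvl_0_0_ns_spdk
ip -4 addr flush "$target_if"; ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"                     # target side lives in the namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"              # host side acts as the initiator
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # host -> namespaced target
ip netns exec "$ns" ping -c 1 10.0.0.1                   # namespaced target -> host
ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!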
00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.207 17:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0515c9c0ec95254ad8acc16cb6c15fa2 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.P2t 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0515c9c0ec95254ad8acc16cb6c15fa2 0 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0515c9c0ec95254ad8acc16cb6c15fa2 0 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0515c9c0ec95254ad8acc16cb6c15fa2 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.P2t 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.P2t 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.P2t 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.150 17:07:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=563cb18e05c38926381ee692c0a28dff6512ec9f0e9edcdb757eb051f34d43b6 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pxv 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 563cb18e05c38926381ee692c0a28dff6512ec9f0e9edcdb757eb051f34d43b6 3 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 563cb18e05c38926381ee692c0a28dff6512ec9f0e9edcdb757eb051f34d43b6 3 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=563cb18e05c38926381ee692c0a28dff6512ec9f0e9edcdb757eb051f34d43b6 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pxv 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pxv 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Pxv 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ff59503a3227806a7c8e83af66b0ec745822366e37355ae5 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vMn 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ff59503a3227806a7c8e83af66b0ec745822366e37355ae5 0 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ff59503a3227806a7c8e83af66b0ec745822366e37355ae5 0 
00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ff59503a3227806a7c8e83af66b0ec745822366e37355ae5 00:28:00.150 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vMn 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vMn 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.vMn 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bfcded7e7fd560c4af13e17dd8daa303a4cab6203ae6785c 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IQz 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bfcded7e7fd560c4af13e17dd8daa303a4cab6203ae6785c 2 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bfcded7e7fd560c4af13e17dd8daa303a4cab6203ae6785c 2 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bfcded7e7fd560c4af13e17dd8daa303a4cab6203ae6785c 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IQz 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IQz 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IQz 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.151 17:07:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fec0cdfd4b2db94d91e35dba2d3216d7 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cGA 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fec0cdfd4b2db94d91e35dba2d3216d7 1 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fec0cdfd4b2db94d91e35dba2d3216d7 1 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fec0cdfd4b2db94d91e35dba2d3216d7 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:00.151 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cGA 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cGA 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.cGA 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe7e1513b4722c16ccd478a639938ba4 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Xle 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe7e1513b4722c16ccd478a639938ba4 1 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe7e1513b4722c16ccd478a639938ba4 1 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=fe7e1513b4722c16ccd478a639938ba4 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Xle 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Xle 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Xle 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed5c9409b958198ff3f140e396bf9ad1c500e0f22b0a9d27 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.T6g 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed5c9409b958198ff3f140e396bf9ad1c500e0f22b0a9d27 2 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed5c9409b958198ff3f140e396bf9ad1c500e0f22b0a9d27 2 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed5c9409b958198ff3f140e396bf9ad1c500e0f22b0a9d27 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.T6g 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.T6g 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.T6g 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:00.413 17:07:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e9b33adf3f3bae5b4632e66148e28234 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yaq 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e9b33adf3f3bae5b4632e66148e28234 0 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e9b33adf3f3bae5b4632e66148e28234 0 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e9b33adf3f3bae5b4632e66148e28234 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yaq 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yaq 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yaq 00:28:00.413 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4ad87daf4e1e751fc3a190a74e2af5609c4f2b8a9a2efcfe199614725298f209 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EC4 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4ad87daf4e1e751fc3a190a74e2af5609c4f2b8a9a2efcfe199614725298f209 3 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4ad87daf4e1e751fc3a190a74e2af5609c4f2b8a9a2efcfe199614725298f209 3 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4ad87daf4e1e751fc3a190a74e2af5609c4f2b8a9a2efcfe199614725298f209 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:00.414 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
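Each gen_dhchap_key call above draws N random bytes with xxd, wraps them in the DHHC-1 ASCII secret representation via an inline python snippet (whose body xtrace does not show), and stores the result mode 0600 under /tmp. A hypothetical re-creation of one such call, assuming the standard DH-HMAC-CHAP secret encoding of base64(key bytes plus little-endian CRC-32) with a 2-hex-digit hash identifier (0=none, 1=sha256, 2=sha384, 3=sha512); this is an illustration, not a copy of nvmf/common.sh:
digest=1                                  # sha256, matching 'gen_dhchap_key sha256 32'
hexkey=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex characters
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$hexkey" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed: CRC-32 appended before base64
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"
echo "$file"                              # the path becomes keys[i] / ckeys[i] in auth.sh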
nvmf/common.sh@705 -- # python - 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EC4 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EC4 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.EC4 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1583253 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1583253 ']' 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.P2t 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Pxv ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pxv 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.vMn 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.IQz ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.IQz 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.cGA 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Xle ]] 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xle 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.675 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.T6g 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yaq ]] 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yaq 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.EC4 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.936 17:07:20 
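The loop traced above registers every generated key file with the running SPDK target through rpc_cmd keyring_file_add_key before the kernel-side target is built. A minimal standalone sketch of that step, assuming rpc_cmd resolves to scripts/rpc.py in the checked-out spdk tree talking to the default /var/tmp/spdk.sock socket, and reusing the temp-file names printed in this run:

# Host keys key0..key4 and controller keys ckey0..ckey3, as generated earlier in this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location of rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.P2t
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pxv
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.vMn
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IQz
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.cGA
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xle
$rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.T6g
$rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.yaq
$rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.EC4   # keyid 4 has no controller key (ckeys[4] is empty)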
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.936 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:00.937 17:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:00.937 17:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:00.937 17:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:04.232 Waiting for block devices as requested 00:28:04.232 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:04.232 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:04.232 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:04.232 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:04.232 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:04.232 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:04.232 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:04.232 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:04.492 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:04.492 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:04.753 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:04.753 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:04.753 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:04.753 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:05.013 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:05.013 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:05.013 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:05.955 No valid GPT data, bailing 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:05.955 17:07:26 
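Condensed, the configure_kernel_target sequence traced here (the mkdir calls above and the echo/ln -s calls that continue below) builds a single-namespace Linux nvmet soft target over TCP. The following sketch shows the equivalent manual configfs commands, using the subsystem NQN, backing device and 10.0.0.1:4420 listener from this run; the redirect targets are not visible in the xtrace, so the attribute file names are the standard kernel nvmet configfs names and are an assumption about where each echo lands.

# Kernel soft target: one subsystem, one namespace backed by the local NVMe disk,
# one TCP port listening on 10.0.0.1:4420 (the address used by this test run).
modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2024-02.io.spdk:cnode0
mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
mkdir ports/1
echo SPDK-nqn.2024-02.io.spdk:cnode0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_model   # assumed target of the 'echo SPDK-...' trace
echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
# nvmet_auth_init (traced further below) then registers the host NQN and
# turns allow_any_host back off so only the authenticated host may connect:
mkdir hosts/nqn.2024-02.io.spdk:host0
echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host   # assumed target of the 'echo 0' trace
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0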
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:05.955 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:06.216 00:28:06.216 Discovery Log Number of Records 2, Generation counter 2 00:28:06.216 =====Discovery Log Entry 0====== 00:28:06.216 trtype: tcp 00:28:06.216 adrfam: ipv4 00:28:06.216 subtype: current discovery subsystem 00:28:06.216 treq: not specified, sq flow control disable supported 00:28:06.216 portid: 1 00:28:06.216 trsvcid: 4420 00:28:06.216 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:06.216 traddr: 10.0.0.1 00:28:06.216 eflags: none 00:28:06.216 sectype: none 00:28:06.216 =====Discovery Log Entry 1====== 00:28:06.216 trtype: tcp 00:28:06.216 adrfam: ipv4 00:28:06.216 subtype: nvme subsystem 00:28:06.216 treq: not specified, sq flow control disable supported 00:28:06.216 portid: 1 00:28:06.216 trsvcid: 4420 00:28:06.216 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:06.216 traddr: 10.0.0.1 00:28:06.216 eflags: none 00:28:06.216 sectype: none 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.216 nvme0n1 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.216 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 nvme0n1 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.477 17:07:26 
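Every connect_authenticate iteration in the remainder of this run repeats the RPC pattern that just completed for keyid 0. A minimal sketch of one such cycle, again assuming rpc_cmd resolves to scripts/rpc.py against the default RPC socket, with the sha256 digest, ffdhe2048 DH group and key pair key0/ckey0 exercised above:

# One authentication round-trip against the kernel target at 10.0.0.1:4420:
# restrict the host to a single digest/dhgroup, attach with the DHCHAP key pair,
# confirm the controller came up, then detach before the next combination.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location of rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
$rpc bdev_nvme_detach_controller nvme0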
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.738 nvme0n1 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:06.738 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.739 17:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.739 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.000 nvme0n1 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.000 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.268 nvme0n1 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.268 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.530 nvme0n1 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.530 17:07:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.530 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.531 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.792 nvme0n1 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.792 
17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.792 17:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 nvme0n1 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:08.053 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.054 17:07:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.054 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.314 nvme0n1 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:08.314 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.315 17:07:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.315 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.575 nvme0n1 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.575 17:07:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.575 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.576 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.836 nvme0n1 00:28:08.836 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.836 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.836 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.836 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.836 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.836 17:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.836 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.096 nvme0n1 00:28:09.096 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.096 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.096 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.096 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.096 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.096 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:09.356 17:07:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.356 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.357 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.618 nvme0n1 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
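The trace above repeats one pattern per key index: nvmet_auth_set_key first pushes the digest ('hmac(sha256)'), the dhgroup and the DHHC-1 secrets to the target side, then connect_authenticate restricts the host to that single digest/dhgroup pair, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key, verifies that bdev_nvme_get_controllers reports nvme0, and detaches before the next key. A minimal sketch of that host-side loop for the ffdhe4096 pass, assuming rpc_cmd forwards to SPDK's scripts/rpc.py and that the key names key0..key4 / ckey0..ckey3 already resolve on the host (their creation is not part of this excerpt):

    # Sketch reconstructed from the trace above; rpc_cmd and the key names are
    # assumed to be provided by the surrounding SPDK test environment.
    digest=sha256
    dhgroup=ffdhe4096
    for keyid in 0 1 2 3 4; do
        # key4 has no controller key in this run, so pass --dhchap-ctrlr-key conditionally
        ckey=()
        [[ $keyid -lt 4 ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
        # restrict the host to the one digest/dhgroup combination under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # connect to the target at the initiator IP with this key pair
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # authentication passed if the controller is visible, then tear it down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done

The same loop body is what produces each "nvme0n1 ... bdev_nvme_detach_controller nvme0" block in the trace; only keyid, the DHHC-1 secrets and the dhgroup change between iterations.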
00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.618 17:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.880 nvme0n1 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.880 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.141 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.141 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.141 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.403 nvme0n1 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.403 17:07:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.403 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.665 nvme0n1 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.665 17:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.238 nvme0n1 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 
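Every attach above is preceded by the same get_main_ns_ip resolution: an associative array maps the transport name to the environment variable that holds the address to dial, and for tcp that resolves through NVMF_INITIATOR_IP to 10.0.0.1 in this run. A rough reconstruction of that helper from the trace follows; the real definition lives in nvmf/common.sh, and TEST_TRANSPORT / NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP are assumed to be exported by the test environment:

    # Sketch of the address resolution seen repeatedly in the trace above;
    # only the steps visible in the xtrace are reproduced here.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # pick the variable name for the active transport, then dereference it
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}                     # e.g. 10.0.0.1 for tcp in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }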
00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.238 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.810 nvme0n1 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.810 17:07:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.810 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.811 17:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.383 nvme0n1 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.383 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.644 nvme0n1 00:28:12.644 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.644 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.644 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.644 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.644 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.644 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
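Every rpc_cmd call in this trace is bracketed by the same two framework lines: xtrace_disable at common/autotest_common.sh@561 before the RPC runs, and the recurring "[[ 0 == 0 ]]" at @589 afterwards, which is the wrapper asserting that the RPC returned status 0. A generic sketch of that bracket pattern; this is an illustration only, the framework's real helper is not visible in this log and may differ:

    # Illustrative shape of the quiet-run-and-assert pattern around each RPC.
    run_quietly_and_check() {
        set +x                     # corresponds to the xtrace_disable step (@561)
        "$@"
        local status=$?
        set -x
        [[ $status == 0 ]]         # corresponds to the recurring "[[ 0 == 0 ]]" lines (@589)
    }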
nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.905 17:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.167 nvme0n1 00:28:13.167 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.428 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.429 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.429 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.429 17:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:14.373 nvme0n1 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:14.373 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.374 17:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.957 nvme0n1 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:14.957 
17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.957 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
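Before each host-side connect, nvmet_auth_set_key (here sha256 / ffdhe8192 / key index 2) pushes the matching secrets to the kernel nvmet target; the echo lines in the trace are the HMAC name, the DH group and the DHHC-1 secrets being written out. A sketch of what that target-side configuration typically looks like through nvmet configfs; the attribute paths below are an assumption based on the Linux nvmet in-band authentication interface and are not visible in this log:

    # Assumed nvmet configfs layout; the host entry itself is created elsewhere in the test.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # negotiated HMAC
    echo ffdhe8192 > "$host/dhchap_dhgroup"        # negotiated DH group
    echo 'DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O:' > "$host/dhchap_key"        # host secret
    echo 'DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY:' > "$host/dhchap_ctrl_key"   # controller secret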
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.958 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.927 nvme0n1 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.927 17:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.927 
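The secrets in this trace use the DHHC-1 representation defined for NVMe in-band authentication, DHHC-1:<t>:<base64>:, where the middle field is a transformation indicator (00 means the secret is used as-is; 01/02/03 correspond to SHA-256/384/512) and the base64 payload is the secret followed by a short CRC-32 check value; those field meanings come from the spec convention rather than from this log. A small sketch for inspecting such a string offline, using a key taken verbatim from the trace above:

    # Peel the base64 payload out of a DHHC-1 secret and dump it.
    secret='DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O:'
    payload=$(cut -d: -f3 <<< "$secret")   # drop the DHHC-1 tag and the transform code
    base64 -d <<< "$payload" | xxd         # 36 bytes here: a 32-byte secret plus the CRC tail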
17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.927 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.868 nvme0n1 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.868 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.869 17:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.441 nvme0n1 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.441 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.442 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.703 nvme0n1 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.703 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.704 17:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.966 nvme0n1 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:17.966 17:07:38 
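Each attach is preceded by the get_main_ns_ip helper traced above: an associative array maps every transport to the environment variable that holds its address (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP), the entry for the active transport is selected, and the variable is dereferenced, which yields 10.0.0.1 for this TCP run. A reconstruction of that logic as suggested by the trace; the surrounding TEST_TRANSPORT and NVMF_* definitions are assumptions made for the sketch:

    # Reconstruction of the IP-selection helper; harness variables assumed to be set elsewhere.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp selects NVMF_INITIATOR_IP here
        [[ -n $ip && -n ${!ip} ]] || return 1  # mirror the emptiness checks in the trace
        echo "${!ip}"                          # prints 10.0.0.1 in this run
    }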
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.966 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.227 nvme0n1 00:28:18.227 17:07:38 
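At this point the sweep has moved on to the sha384 digest with the ffdhe2048 group; the loop markers visible earlier in the trace (host/auth.sh@100 through @104) show that the test iterates over every digest, every DH group and every key index, setting the key on the target and then running connect_authenticate for each combination. A condensed sketch of that driver loop; the array contents below list only the values seen in this part of the log, and keys[], nvmet_auth_set_key and connect_authenticate are the script's own definitions referenced by the trace:

    # Sweep implied by the @100-@104 loop markers (arrays trimmed to values visible here).
    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target-side key (@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # host-side round-trip (@104)
            done
        done
    done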
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.227 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.228 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.488 nvme0n1 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.488 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.489 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.749 nvme0n1 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.749 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.750 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.750 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.750 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.750 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.750 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.750 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.750 17:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.010 nvme0n1 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.010 
17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.010 17:07:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.010 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.271 nvme0n1 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.271 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.272 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.532 nvme0n1 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.532 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.794 nvme0n1 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.794 
17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.794 17:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.055 nvme0n1 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.055 
17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.055 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.056 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.318 nvme0n1 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.318 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.578 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.579 17:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.579 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.839 nvme0n1 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.839 17:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.100 nvme0n1 00:28:21.100 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.100 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.100 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.100 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.100 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.100 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.101 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.361 nvme0n1 00:28:21.361 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.361 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.361 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.361 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.361 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.361 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.621 17:07:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.621 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.622 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.882 nvme0n1 00:28:21.882 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.882 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.882 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.882 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.882 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.882 17:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.882 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.453 nvme0n1 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.453 17:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.025 nvme0n1 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.026 17:07:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.026 17:07:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.026 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.609 nvme0n1 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.609 17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.609 
17:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.180 nvme0n1 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.180 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.751 nvme0n1 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.751 17:07:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.751 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.752 17:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.323 nvme0n1 00:28:25.323 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.323 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.323 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.323 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.323 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.323 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.585 17:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.155 nvme0n1 00:28:26.155 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.155 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.155 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.155 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.155 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.155 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.415 
17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.415 17:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.984 nvme0n1 00:28:26.984 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.984 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.984 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.984 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.984 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.984 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.244 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.245 17:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.816 nvme0n1 00:28:27.816 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.816 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.816 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.816 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.816 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.816 17:07:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.076 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.076 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.077 17:07:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.077 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.647 nvme0n1 00:28:28.647 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.647 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.647 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.647 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.647 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.906 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.907 17:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:28.907 nvme0n1 00:28:28.907 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.907 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.907 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.907 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.907 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.907 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.167 nvme0n1 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.167 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.427 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.427 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.427 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:29.427 
17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.428 nvme0n1 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.428 
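The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment traced at host/auth.sh@58 is what keeps the controller (bidirectional) secret optional: when ckeys[keyid] is set and non-empty the array expands to the extra flag pair, otherwise it expands to nothing, which is why key 4 above is attached with --dhchap-key key4 alone while keys 0-3 also carry --dhchap-ctrlr-key ckeyN. A standalone illustration of that expansion:

    # ${var:+words}: expands to "words" only when var is set and non-empty,
    # so an empty controller key contributes no arguments at all.
    ckeys=([0]="some-controller-secret" [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # keyid=0 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 extra args: <none>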
17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.428 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.689 nvme0n1 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.689 17:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.950 nvme0n1 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.950 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.951 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.212 nvme0n1 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.212 
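The secrets echoed throughout the trace use the DHHC-1 text form of NVMe in-band authentication keys. Under the nvme-cli/kernel convention (an assumption here, not something this log states), the middle field is base64 over the raw secret bytes followed by a 4-byte CRC-32, so decoding it reveals the key size. A quick check against one of the keys from this run:

    # Decode the middle field of a DHHC-1 secret copied verbatim from this log
    # (assumes the payload layout "secret bytes + 4-byte CRC-32").
    key='DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL:'
    b64=${key#DHHC-1:*:}                          # strip the "DHHC-1:NN:" prefix
    b64=${b64%:}                                  # strip the trailing colon
    bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "payload: $bytes bytes ($((bytes - 4)) secret bytes + CRC, if the convention holds)"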
17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.212 17:07:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.212 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.213 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.213 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.474 nvme0n1 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:30.474 17:07:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.474 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.736 nvme0n1 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.736 17:07:50 
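The oddly escaped [[ nvme0 == \n\v\m\e\0 ]] checks are an xtrace artifact rather than anything the script literally contains: inside [[ ]] the right-hand side of == is a glob pattern, the script quotes it to force an exact string comparison, and set -x renders that quoting as a backslash before every character. The same check written out:

    # Quoting the right-hand side of == inside [[ ]] disables pattern matching;
    # under "set -x" bash prints the quoted side as \n\v\m\e\0.
    set -x
    ctrl=nvme0
    [[ $ctrl == "nvme0" ]] && echo match          # traces as: [[ nvme0 == \n\v\m\e\0 ]]
    set +x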
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.736 17:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.996 nvme0n1 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.996 
17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:30.996 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.997 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
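Putting one host-side connect together: this is the bdev_nvme_attach_controller call the trace repeats for every combination, with its connection parameters spelled out. The values are verbatim from this run; rpc_cmd again stands for the harness RPC wrapper, and key0/ckey0 are the key names set up earlier in the test (outside this excerpt).

    # The authenticated attach issued for each combination (values from this run).
    args=(
        -b nvme0                        # name for the resulting controller
        -t tcp -f ipv4                  # transport and address family
        -a 10.0.0.1 -s 4420             # target address and NVMe/TCP service id (port)
        -q nqn.2024-02.io.spdk:host0    # host NQN presented during DH-HMAC-CHAP
        -n nqn.2024-02.io.spdk:cnode0   # subsystem NQN to connect to
        --dhchap-key key0               # host secret (key name set up earlier in the test)
        --dhchap-ctrlr-key ckey0        # controller secret, present only for bidirectional auth
    )
    rpc_cmd bdev_nvme_attach_controller "${args[@]}"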
00:28:31.256 nvme0n1 00:28:31.256 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.256 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.256 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.256 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.256 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.256 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.257 17:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.257 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.517 nvme0n1 00:28:31.517 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.517 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.517 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.517 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.517 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.517 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.819 17:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.819 17:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.819 17:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.081 nvme0n1 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
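
Every secret echoed in this section is in the NVMe DH-HMAC-CHAP representation DHHC-1:<xx>:<base64>:, where the middle field indicates how the configured secret is transformed (00 meaning it is used as-is, 01/02/03 indicating a SHA-256/384/512 transform) and the base64 payload, per the documented key representation, is the raw secret followed by a 32-bit CRC. A quick shell check of a key taken from the trace above, under that assumption about the payload layout:

  key='DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL:'   # copied from auth.sh@45 above
  payload=${key#DHHC-1:*:}                 # strip the DHHC-1:<xx>: prefix
  payload=${payload%:}                     # and the trailing ':'
  echo -n "$payload" | base64 -d | wc -c   # prints 36: a 32-byte secret plus the 4-byte CRC

The longer 03-class keys in this run decode to 68 bytes by the same arithmetic, i.e. a 64-byte secret, which is consistent with the sha512 digest being exercised here.
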
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.081 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.342 nvme0n1 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.342 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.343 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.604 nvme0n1 00:28:32.604 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.604 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.604 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.604 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.604 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.604 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.865 17:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.126 nvme0n1 00:28:33.126 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.126 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.126 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.126 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.126 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
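
nvmet_auth_set_key (auth.sh@42-51) is the target-side half of each iteration: the three echo lines in every block are the digest, the DH group and the DHHC-1 secrets, but xtrace does not show where they are redirected. Assuming the usual Linux nvmet configfs layout for per-host DH-CHAP attributes (the paths and attribute names below are a hypothetical reconstruction, not something visible in this log), the writes would look roughly like:

  # $key/$ckey are the DHHC-1 strings assigned at auth.sh@45-46
  hostnqn=nqn.2024-02.io.spdk:host0                   # host entry assumed to already exist
  cfg=/sys/kernel/config/nvmet/hosts/$hostnqn         # assumed configfs location

  echo 'hmac(sha512)' > "$cfg/dhchap_hash"            # digest, auth.sh@48
  echo 'ffdhe4096'    > "$cfg/dhchap_dhgroup"         # DH group, auth.sh@49 (attribute name assumed)
  echo "$key"         > "$cfg/dhchap_key"             # host secret, auth.sh@50
  [[ -n $ckey ]] && echo "$ckey" > "$cfg/dhchap_ctrl_key"   # auth.sh@51, skipped when no ctrlr key

The [[ -z '' ]] seen at auth.sh@51 for keyid 4 is exactly that guard firing: no controller secret is configured, so only unidirectional authentication is set up for that key.
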
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.127 17:07:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.127 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.698 nvme0n1 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.698 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.698 17:07:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.699 17:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.269 nvme0n1 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.269 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.270 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.841 nvme0n1 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.841 17:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.414 nvme0n1 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.414 17:07:55 
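
The get_main_ns_ip fragment that repeats before every attach (nvmf/common.sh@741-755) just maps the transport to the environment variable holding the initiator-side address and prints its value, 10.0.0.1 throughout this run. A reconstruction from the trace, with the variable names as traced; the indirect expansion at the end is inferred, since xtrace only shows its result, and TEST_TRANSPORT is assumed to be how the transport name reaches the function:

  get_main_ns_ip() {                              # sketch of nvmf/common.sh@741-755
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                        # common.sh@747
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1      # common.sh@747
      ip=${ip_candidates[$TEST_TRANSPORT]}                        # common.sh@748
      [[ -z ${!ip} ]] && return 1                                 # ${!ip}: NVMF_INITIATOR_IP -> 10.0.0.1
      echo "${!ip}"                                               # common.sh@755
  }

For tcp the candidate resolves to NVMF_INITIATOR_IP, which is why every bdev_nvme_attach_controller in this section targets -a 10.0.0.1 -s 4420.
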
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.414 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.676 nvme0n1 00:28:35.676 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.676 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.676 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.676 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.676 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.676 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUxNWM5YzBlYzk1MjU0YWQ4YWNjMTZjYjZjMTVmYTJQVJfL: 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: ]] 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYzY2IxOGUwNWMzODkyNjM4MWVlNjkyYzBhMjhkZmY2NTEyZWM5ZjBlOWVkY2RiNzU3ZWIwNTFmMzRkNDNiNmUyISo=: 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.935 17:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.935 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.505 nvme0n1 00:28:36.505 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.505 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.505 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.505 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.505 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.505 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.766 17:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.337 nvme0n1 00:28:37.337 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.337 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.337 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.337 17:07:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.337 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.337 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVjMGNkZmQ0YjJkYjk0ZDkxZTM1ZGJhMmQzMjE2ZDeBO61O: 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: ]] 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmU3ZTE1MTNiNDcyMmMxNmNjZDQ3OGE2Mzk5MzhiYTQOMFmY: 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.598 17:07:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.598 17:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.171 nvme0n1 00:28:38.171 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.171 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.171 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.171 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.171 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.171 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1Yzk0MDliOTU4MTk4ZmYzZjE0MGUzOTZiZjlhZDFjNTAwZTBmMjJiMGE5ZDI38K7syA==: 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTliMzNhZGYzZjNiYWU1YjQ2MzJlNjYxNDhlMjgyMzT3Te9m: 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.432 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.433 17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.433 
17:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.003 nvme0n1 00:28:39.003 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.003 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.003 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.003 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.003 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.003 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFkODdkYWY0ZTFlNzUxZmMzYTE5MGE3NGUyYWY1NjA5YzRmMmI4YTlhMmVmY2ZlMTk5NjE0NzI1Mjk4ZjIwOWNELkA=: 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.263 17:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.835 nvme0n1 00:28:39.835 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.835 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.835 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.835 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.835 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.835 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmY1OTUwM2EzMjI3ODA2YTdjOGU4M2FmNjZiMGVjNzQ1ODIyMzY2ZTM3MzU1YWU1R5wt/g==: 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjZGVkN2U3ZmQ1NjBjNGFmMTNlMTdkZDhkYWEzMDNhNGNhYjYyMDNhZTY3ODVjpm9WMg==: 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 request: 00:28:40.097 { 00:28:40.097 "name": "nvme0", 00:28:40.097 "trtype": "tcp", 00:28:40.097 "traddr": "10.0.0.1", 00:28:40.097 "adrfam": "ipv4", 00:28:40.097 "trsvcid": "4420", 00:28:40.097 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:40.097 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:40.097 "prchk_reftag": false, 00:28:40.097 "prchk_guard": false, 00:28:40.097 "hdgst": false, 00:28:40.097 "ddgst": false, 00:28:40.097 "method": "bdev_nvme_attach_controller", 00:28:40.097 "req_id": 1 00:28:40.097 } 00:28:40.097 Got JSON-RPC error response 00:28:40.097 response: 00:28:40.097 { 00:28:40.097 "code": -5, 00:28:40.097 "message": "Input/output error" 00:28:40.097 } 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.097 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.098 17:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.098 request: 00:28:40.098 { 00:28:40.098 "name": "nvme0", 00:28:40.098 "trtype": "tcp", 00:28:40.098 "traddr": "10.0.0.1", 00:28:40.098 "adrfam": "ipv4", 00:28:40.098 "trsvcid": "4420", 00:28:40.098 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:40.098 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:40.098 "prchk_reftag": false, 00:28:40.098 "prchk_guard": false, 00:28:40.098 "hdgst": false, 00:28:40.098 "ddgst": false, 00:28:40.098 "dhchap_key": "key2", 00:28:40.098 "method": "bdev_nvme_attach_controller", 00:28:40.098 "req_id": 1 00:28:40.098 } 00:28:40.098 Got JSON-RPC error response 00:28:40.098 response: 00:28:40.098 { 00:28:40.098 "code": -5, 00:28:40.098 "message": "Input/output error" 00:28:40.098 } 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:40.098 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.359 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.360 request: 00:28:40.360 { 00:28:40.360 "name": "nvme0", 00:28:40.360 "trtype": "tcp", 00:28:40.360 "traddr": "10.0.0.1", 00:28:40.360 "adrfam": "ipv4", 00:28:40.360 "trsvcid": "4420", 00:28:40.360 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:40.360 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:40.360 "prchk_reftag": false, 00:28:40.360 "prchk_guard": false, 00:28:40.360 "hdgst": false, 00:28:40.360 "ddgst": false, 00:28:40.360 "dhchap_key": "key1", 00:28:40.360 "dhchap_ctrlr_key": "ckey2", 00:28:40.360 "method": "bdev_nvme_attach_controller", 00:28:40.360 "req_id": 1 00:28:40.360 } 00:28:40.360 Got JSON-RPC error response 00:28:40.360 response: 00:28:40.360 { 00:28:40.360 "code": -5, 00:28:40.360 "message": "Input/output error" 00:28:40.360 } 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.360 rmmod nvme_tcp 00:28:40.360 rmmod nvme_fabrics 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1583253 ']' 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1583253 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1583253 ']' 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1583253 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583253 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583253' 00:28:40.360 killing process with pid 1583253 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1583253 00:28:40.360 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1583253 00:28:40.621 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.621 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.621 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.621 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.621 17:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.621 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.621 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.621 17:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.533 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.533 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:42.533 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:42.533 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:42.533 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:42.533 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:42.794 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:42.794 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:42.794 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:42.794 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:42.794 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:42.794 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:42.794 17:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:46.098 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:46.098 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:46.098 17:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.P2t /tmp/spdk.key-null.vMn /tmp/spdk.key-sha256.cGA /tmp/spdk.key-sha384.T6g /tmp/spdk.key-sha512.EC4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:46.098 17:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:49.430 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:49.430 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:49.430 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:49.692 00:28:49.692 real 0m57.843s 00:28:49.692 user 0m51.800s 00:28:49.692 sys 0m14.506s 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.692 ************************************ 00:28:49.692 END TEST nvmf_auth_host 00:28:49.692 ************************************ 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.692 ************************************ 00:28:49.692 START TEST nvmf_digest 00:28:49.692 ************************************ 00:28:49.692 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:49.953 * Looking for test storage... 
00:28:49.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.953 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.953 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:49.953 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.954 17:08:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.954 
17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.954 17:08:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:56.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:56.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.549 
17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:56.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.549 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:56.550 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.550 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.811 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.811 17:08:16 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.811 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:56.811 17:08:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:56.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:28:56.811 00:28:56.811 --- 10.0.0.2 ping statistics --- 00:28:56.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.811 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.480 ms 00:28:56.811 00:28:56.811 --- 10.0.0.1 ping statistics --- 00:28:56.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.811 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:56.811 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.072 ************************************ 00:28:57.072 START TEST nvmf_digest_clean 00:28:57.072 ************************************ 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1599772 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1599772 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1599772 ']' 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.072 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:57.072 [2024-07-25 17:08:17.189890] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:28:57.072 [2024-07-25 17:08:17.189948] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.072 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.072 [2024-07-25 17:08:17.260974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.072 [2024-07-25 17:08:17.335785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.072 [2024-07-25 17:08:17.335827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.072 [2024-07-25 17:08:17.335834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.072 [2024-07-25 17:08:17.335840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.072 [2024-07-25 17:08:17.335846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
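The nvmf_tcp_init block traced above (nvmf/common.sh@229-268) is what gives the target and the initiator separate network stacks on the same box: one ice port is moved into a private namespace and becomes the target side, the other stays in the default namespace as the initiator. A minimal stand-alone sketch of that topology, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in the trace:

    # target port goes into its own namespace, initiator port stays in the default one
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # addressing: 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port and sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1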
00:28:57.072 [2024-07-25 17:08:17.335864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.015 17:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.015 null0 00:28:58.015 [2024-07-25 17:08:18.062558] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.015 [2024-07-25 17:08:18.086754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1599902 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1599902 /var/tmp/bperf.sock 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1599902 ']' 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
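The target bring-up above boils down to two steps: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and common_target_config (host/digest.sh@43) then configures it over /var/tmp/spdk.sock. Only the resulting notices are printed (a null0 bdev, the TCP transport, a listener on 10.0.0.2:4420), not the individual RPCs, so the calls below are an assumed hand-rolled equivalent using standard rpc.py method names; workspace paths are shortened and the bdev size, block size and serial number are illustrative:

    # start the target paused inside the namespace (as traced at nvmf/common.sh@480)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

    # assumed equivalent of common_target_config; not taken verbatim from this trace
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 100 4096
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420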
00:28:58.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.015 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:58.015 [2024-07-25 17:08:18.138401] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:28:58.015 [2024-07-25 17:08:18.138447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599902 ] 00:28:58.015 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.015 [2024-07-25 17:08:18.212794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.015 [2024-07-25 17:08:18.276891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.958 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.958 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:58.958 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:58.958 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:58.958 17:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.958 17:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.958 17:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.218 nvme0n1 00:28:59.478 17:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:59.478 17:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.478 Running I/O for 2 seconds... 
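The initiator side of each pass follows the same pattern traced above: bdevperf is launched idle (-z) and paused (--wait-for-rpc) on its own RPC socket, the digest under test is enabled when the remote controller is attached, and the timed run is then driven through bdevperf's RPC interface. Condensed from the commands above, with the long workspace paths shortened:

    # bdevperf on /var/tmp/bperf.sock: randread, 4 KiB I/O, queue depth 128, 2 s, no jobs yet
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # finish init, then attach the target subsystem with data digest enabled (--ddgst)
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # the attach exposes nvme0n1; run the 2-second workload against it
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests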
00:29:01.396 00:29:01.396 Latency(us) 00:29:01.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.396 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:01.396 nvme0n1 : 2.01 20900.87 81.64 0.00 0.00 6117.21 3181.23 13926.40 00:29:01.396 =================================================================================================================== 00:29:01.397 Total : 20900.87 81.64 0.00 0.00 6117.21 3181.23 13926.40 00:29:01.397 0 00:29:01.397 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:01.397 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:01.397 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:01.397 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:01.397 | select(.opcode=="crc32c") 00:29:01.397 | "\(.module_name) \(.executed)"' 00:29:01.397 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1599902 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1599902 ']' 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1599902 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599902 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599902' 00:29:01.658 killing process with pid 1599902 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1599902 00:29:01.658 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.658 00:29:01.658 Latency(us) 00:29:01.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.658 =================================================================================================================== 00:29:01.658 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1599902 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:01.658 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1600604 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1600604 /var/tmp/bperf.sock 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1600604 ']' 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.920 17:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.920 [2024-07-25 17:08:21.979922] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:29:01.920 [2024-07-25 17:08:21.979975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600604 ] 00:29:01.920 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.920 Zero copy mechanism will not be used. 
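Before tearing bdevperf down, each pass checks that the crc32c work generated by the digest actually ran, and ran in the expected accel module (software here, since every run uses scan_dsa=false). The get_accel_stats step traced above reduces to one RPC plus a jq filter:

    # pull crc32c stats from the bperf application and split module name / executed count
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    (( acc_executed > 0 ))            # digests were really computed
    [[ $acc_module == software ]]     # and by the expected module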
00:29:01.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.920 [2024-07-25 17:08:22.054712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.920 [2024-07-25 17:08:22.107321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.493 17:08:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.493 17:08:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:02.493 17:08:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:02.493 17:08:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:02.493 17:08:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:02.754 17:08:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.754 17:08:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.015 nvme0n1 00:29:03.015 17:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:03.015 17:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.015 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:03.015 Zero copy mechanism will not be used. 00:29:03.015 Running I/O for 2 seconds... 
00:29:05.579 00:29:05.579 Latency(us) 00:29:05.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.580 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:05.580 nvme0n1 : 2.00 2581.66 322.71 0.00 0.00 6194.46 1262.93 11851.09 00:29:05.580 =================================================================================================================== 00:29:05.580 Total : 2581.66 322.71 0.00 0.00 6194.46 1262.93 11851.09 00:29:05.580 0 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:05.580 | select(.opcode=="crc32c") 00:29:05.580 | "\(.module_name) \(.executed)"' 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1600604 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1600604 ']' 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1600604 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1600604 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1600604' 00:29:05.580 killing process with pid 1600604 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1600604 00:29:05.580 Received shutdown signal, test time was about 2.000000 seconds 00:29:05.580 00:29:05.580 Latency(us) 00:29:05.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.580 =================================================================================================================== 00:29:05.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1600604 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1601319 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1601319 /var/tmp/bperf.sock 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1601319 ']' 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:05.580 [2024-07-25 17:08:25.610783] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:29:05.580 [2024-07-25 17:08:25.610832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601319 ] 00:29:05.580 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.580 [2024-07-25 17:08:25.652482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.580 [2024-07-25 17:08:25.705903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:05.580 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.841 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.841 17:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.101 nvme0n1 00:29:06.101 17:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:06.101 17:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.101 Running I/O for 2 seconds... 
00:29:08.016 00:29:08.016 Latency(us) 00:29:08.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.016 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.016 nvme0n1 : 2.00 21860.56 85.39 0.00 0.00 5847.66 5133.65 19551.57 00:29:08.016 =================================================================================================================== 00:29:08.016 Total : 21860.56 85.39 0.00 0.00 5847.66 5133.65 19551.57 00:29:08.016 0 00:29:08.016 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:08.016 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:08.277 | select(.opcode=="crc32c") 00:29:08.277 | "\(.module_name) \(.executed)"' 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1601319 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1601319 ']' 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1601319 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601319 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1601319' 00:29:08.277 killing process with pid 1601319 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1601319 00:29:08.277 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.277 00:29:08.277 Latency(us) 00:29:08.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.277 =================================================================================================================== 00:29:08.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.277 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1601319 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1601943 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1601943 /var/tmp/bperf.sock 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1601943 ']' 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.539 [2024-07-25 17:08:28.630583] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:29:08.539 [2024-07-25 17:08:28.630630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601943 ] 00:29:08.539 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.539 Zero copy mechanism will not be used. 
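The teardown between passes is the killprocess helper whose trace repeats above (autotest_common.sh@950-974). A condensed sketch, leaving out the sudo special case it guards against:

    # killprocess <pid>, condensed from the traced checks
    kill -0 "$pid"                                      # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")     # reactor_1 for bdevperf
    [[ $process_name != sudo ]]                         # never signal a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                          # bdevperf prints its shutdown stats here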
00:29:08.539 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.539 [2024-07-25 17:08:28.671948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.539 [2024-07-25 17:08:28.726256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:08.539 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:08.801 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.801 17:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.063 nvme0n1 00:29:09.063 17:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:09.063 17:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.325 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.325 Zero copy mechanism will not be used. 00:29:09.325 Running I/O for 2 seconds... 
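At this point nvmf_digest_clean has launched all four of its run_bperf combinations: randread and randwrite, each once with 4096-byte I/O at queue depth 128 and once with 131072-byte I/O at queue depth 16, always with scan_dsa=false. As the traces at host/digest.sh@77-84 show, those positional arguments map straight onto the bdevperf command line; a condensed sketch of the launch portion only:

    # run_bperf <rw> <bs> <qd> <scan_dsa>
    rw=$1 bs=$2 qd=$3 scan_dsa=$4
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # then configure and run as shown earlier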
00:29:11.279 00:29:11.279 Latency(us) 00:29:11.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.279 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:11.279 nvme0n1 : 2.01 2258.27 282.28 0.00 0.00 7071.89 5515.95 25886.72 00:29:11.279 =================================================================================================================== 00:29:11.279 Total : 2258.27 282.28 0.00 0.00 7071.89 5515.95 25886.72 00:29:11.279 0 00:29:11.279 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:11.279 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:11.279 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:11.279 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:11.279 | select(.opcode=="crc32c") 00:29:11.279 | "\(.module_name) \(.executed)"' 00:29:11.279 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1601943 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1601943 ']' 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1601943 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601943 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1601943' 00:29:11.540 killing process with pid 1601943 00:29:11.540 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1601943 00:29:11.540 Received shutdown signal, test time was about 2.000000 seconds 00:29:11.540 00:29:11.540 Latency(us) 00:29:11.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.541 =================================================================================================================== 00:29:11.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1601943 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1599772 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1599772 ']' 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1599772 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599772 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599772' 00:29:11.541 killing process with pid 1599772 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1599772 00:29:11.541 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1599772 00:29:11.801 00:29:11.801 real 0m14.813s 00:29:11.801 user 0m28.631s 00:29:11.801 sys 0m3.064s 00:29:11.801 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:11.801 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.801 ************************************ 00:29:11.801 END TEST nvmf_digest_clean 00:29:11.801 ************************************ 00:29:11.801 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:11.801 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:11.801 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:11.801 17:08:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:11.801 ************************************ 00:29:11.801 START TEST nvmf_digest_error 00:29:11.801 ************************************ 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1602651 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1602651 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1602651 ']' 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.801 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.062 [2024-07-25 17:08:32.078783] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:29:12.062 [2024-07-25 17:08:32.078830] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.062 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.062 [2024-07-25 17:08:32.144240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.062 [2024-07-25 17:08:32.205206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.062 [2024-07-25 17:08:32.205245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.062 [2024-07-25 17:08:32.205252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.062 [2024-07-25 17:08:32.205258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.062 [2024-07-25 17:08:32.205264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
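The nvmf_digest_error test that starts here reuses the same target and bperf plumbing as the clean test, but, as the traces just below show, it routes crc32c through the error accel module so digests can be corrupted on demand, and it tells the bperf side to keep retrying and to count NVMe errors rather than fail the run. Condensed from those traces (workspace paths shortened):

    # target side: route crc32c to the injectable error module
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error

    # bperf side: retry indefinitely and keep per-opcode NVMe error statistics
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # start with injection disabled, then switch to corrupting crc32c results
    # (flags exactly as in the trace below); each corrupted data digest surfaces
    # on the initiator as a COMMAND TRANSIENT TRANSPORT ERROR completion
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256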
00:29:12.062 [2024-07-25 17:08:32.205282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.633 [2024-07-25 17:08:32.895249] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.633 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.894 null0 00:29:12.894 [2024-07-25 17:08:32.971986] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.894 [2024-07-25 17:08:32.996184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.894 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.894 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:12.894 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:12.894 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:12.894 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:12.894 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:12.894 17:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1602899 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1602899 /var/tmp/bperf.sock 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1602899 ']' 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.894 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:12.894 [2024-07-25 17:08:33.051052] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:29:12.894 [2024-07-25 17:08:33.051100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602899 ] 00:29:12.894 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.894 [2024-07-25 17:08:33.125811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.154 [2024-07-25 17:08:33.180289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.724 17:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.295 nvme0n1 00:29:14.295 17:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:14.295 17:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.295 17:08:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.295 17:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.295 17:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:14.295 17:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.295 Running I/O for 2 seconds... 00:29:14.295 [2024-07-25 17:08:34.399272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.295 [2024-07-25 17:08:34.399304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.295 [2024-07-25 17:08:34.399313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.295 [2024-07-25 17:08:34.412428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.295 [2024-07-25 17:08:34.412449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.295 [2024-07-25 17:08:34.412456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.295 [2024-07-25 17:08:34.426487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.295 [2024-07-25 17:08:34.426506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.295 [2024-07-25 17:08:34.426512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.295 [2024-07-25 17:08:34.439248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.295 [2024-07-25 17:08:34.439266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.295 [2024-07-25 17:08:34.439277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.295 [2024-07-25 17:08:34.450167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.295 [2024-07-25 17:08:34.450184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.295 [2024-07-25 17:08:34.450191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.295 [2024-07-25 17:08:34.463127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.463144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.463151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.476172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.476189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.476196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.488386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.488403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.488410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.500893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.500910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.500917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.513256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.513273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.513279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.526872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.526890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.526896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.538720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.538737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.538743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.552059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.552076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:14.296 [2024-07-25 17:08:34.552082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.296 [2024-07-25 17:08:34.564083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.296 [2024-07-25 17:08:34.564099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.296 [2024-07-25 17:08:34.564106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.577146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.577163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.577170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.589568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.589585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.589591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.601756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.601773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.601780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.614739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.614756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.614762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.627042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.627059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.627066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.639010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.639027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:18500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.639033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.650993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.651010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.651019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.664649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.664666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.664672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.676926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.676943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.676949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.689324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.689340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.689347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.702536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.702553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.702560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.713959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.713976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.713983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.727100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.727116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.727123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.739544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.739561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.739568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.751285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.751303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.751309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.764022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.764043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.764050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.775906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.775923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.775930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.789884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.789901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.789908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.802226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.558 [2024-07-25 17:08:34.802243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.802249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.813956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 
00:29:14.558 [2024-07-25 17:08:34.813973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.558 [2024-07-25 17:08:34.813979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.558 [2024-07-25 17:08:34.826634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.559 [2024-07-25 17:08:34.826651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.559 [2024-07-25 17:08:34.826657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.820 [2024-07-25 17:08:34.838836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.838853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.838860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.852188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.852208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.852215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.864908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.864925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.864931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.876569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.876585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.876592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.888679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.888695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.888701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.902362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.902379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.902385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.913866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.913883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.913889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.927902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.927919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.927926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.939218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.939234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.939240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.951879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.951896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.951902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.964635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.964697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.964704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.976472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.976489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.976498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:34.989794] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:34.989810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:34.989817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:35.002244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.002260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.002267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:35.014499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.014516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.014522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:35.026580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.026596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.026602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:35.038409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.038425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.038431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:35.052542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.052558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.052564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:35.064455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.064472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.064478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:14.821 [2024-07-25 17:08:35.076769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.076786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.076792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.821 [2024-07-25 17:08:35.088616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:14.821 [2024-07-25 17:08:35.088633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.821 [2024-07-25 17:08:35.088639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.102820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.102837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.102843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.114665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.114683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.114690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.127180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.127197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.127206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.139797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.139814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.139821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.151914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.151931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.151937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.164706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.164724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.164731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.178190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.178211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.178217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.189706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.189722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.189732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.201862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.201878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.201884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.214429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.214445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.214451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.226796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.226813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.226819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.239474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.239490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.239497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.251735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.251752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.083 [2024-07-25 17:08:35.251758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.083 [2024-07-25 17:08:35.263939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.083 [2024-07-25 17:08:35.263956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.084 [2024-07-25 17:08:35.263962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.084 [2024-07-25 17:08:35.275933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.084 [2024-07-25 17:08:35.275949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.084 [2024-07-25 17:08:35.275956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.084 [2024-07-25 17:08:35.289648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.084 [2024-07-25 17:08:35.289664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.084 [2024-07-25 17:08:35.289671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.084 [2024-07-25 17:08:35.301934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.084 [2024-07-25 17:08:35.301957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.084 [2024-07-25 17:08:35.301963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.084 [2024-07-25 17:08:35.314645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.084 [2024-07-25 17:08:35.314661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.084 [2024-07-25 17:08:35.314668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.084 [2024-07-25 17:08:35.326524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.084 [2024-07-25 17:08:35.326541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.084 [2024-07-25 17:08:35.326547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.084 [2024-07-25 17:08:35.339044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.084 [2024-07-25 17:08:35.339061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.084 [2024-07-25 17:08:35.339067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.084 [2024-07-25 17:08:35.352312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.084 [2024-07-25 17:08:35.352329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.084 [2024-07-25 17:08:35.352335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.364712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.364730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.364736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.376528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.376545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.376552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.388892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.388909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.388916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.401179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.401197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.401207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.414531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.414548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10248 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.414555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.425886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.425904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.425910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.439303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.439321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.439327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.452733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.452750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.452756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.464316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.464335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.464342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.476215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.476233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.476239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.488907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.488925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.488932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.501452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.501469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.501476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.513956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.513973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.513982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.527178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.527195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.527206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.538838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.538856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.538862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.550912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.550929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.550936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.564190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.564210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.564216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.576842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.576860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.576867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.588926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 
00:29:15.345 [2024-07-25 17:08:35.588943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.588949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.345 [2024-07-25 17:08:35.601212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.345 [2024-07-25 17:08:35.601229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.345 [2024-07-25 17:08:35.601235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.346 [2024-07-25 17:08:35.613647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.346 [2024-07-25 17:08:35.613663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.346 [2024-07-25 17:08:35.613670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.606 [2024-07-25 17:08:35.625964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.606 [2024-07-25 17:08:35.625985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.606 [2024-07-25 17:08:35.625992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.606 [2024-07-25 17:08:35.638272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.606 [2024-07-25 17:08:35.638290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.606 [2024-07-25 17:08:35.638296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.606 [2024-07-25 17:08:35.651639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.606 [2024-07-25 17:08:35.651656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.606 [2024-07-25 17:08:35.651662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.606 [2024-07-25 17:08:35.664113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.606 [2024-07-25 17:08:35.664130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.606 [2024-07-25 17:08:35.664136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.606 [2024-07-25 17:08:35.676117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.606 [2024-07-25 17:08:35.676135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.606 [2024-07-25 17:08:35.676141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.689037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.689054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.689060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.701135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.701152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.701158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.712760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.712778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.712784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.727100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.727117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.727123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.738408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.738425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.738432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.751281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.751299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.751305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.763969] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.763986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.763992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.777529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.777547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.777553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.789626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.789643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.789649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.800854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.800872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.800878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.814097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.814115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.814122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.826361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.826379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.826386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.838472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.838493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.838500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:15.607 [2024-07-25 17:08:35.850446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.850470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.863659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.863676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.863683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.607 [2024-07-25 17:08:35.875574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.607 [2024-07-25 17:08:35.875592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.607 [2024-07-25 17:08:35.875598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.888192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.888214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.888221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.901490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.901507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.901513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.914466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.914484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.914490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.927031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.927049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.927055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.938983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.939001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.939008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.951544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.951561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.951568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.963887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.963905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.963912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.976396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.976413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.976420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:35.988677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:35.988695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:35.988701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:36.001075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:36.001094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:36.001100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:36.013415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:36.013432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.868 [2024-07-25 17:08:36.013439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.868 [2024-07-25 17:08:36.025625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.868 [2024-07-25 17:08:36.025643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.025649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.037909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.037926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.037933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.050690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.050707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.050717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.063925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.063942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.063949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.075251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.075269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.075276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.088611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.088629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.088635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.100665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.100682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.869 [2024-07-25 17:08:36.100689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.113786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.113803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.113810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.126530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.126548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.126554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.869 [2024-07-25 17:08:36.138082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:15.869 [2024-07-25 17:08:36.138100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.869 [2024-07-25 17:08:36.138107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.150611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.150629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.150636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.163728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.163749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.163755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.175993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.176011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.176017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.187724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.187741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:4031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.187748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.201070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.201088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.201095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.213497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.213514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.213521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.225646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.225663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.225670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.239861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.239878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.239884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.251975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.251993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.252000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.264008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.264026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.264033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.276863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.276881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.276887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.289148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.289166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.289172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.301215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.301233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.301239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.313347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.313364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.313371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.325640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.325657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.325664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.339342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.339359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.339366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.350531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 00:29:16.130 [2024-07-25 17:08:36.350548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.130 [2024-07-25 17:08:36.350554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.130 [2024-07-25 17:08:36.364009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0) 
00:29:16.130 [2024-07-25 17:08:36.364026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.130 [2024-07-25 17:08:36.364033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.130 [2024-07-25 17:08:36.376538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d3cd0)
00:29:16.130 [2024-07-25 17:08:36.376556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.130 [2024-07-25 17:08:36.376565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.130
00:29:16.130 Latency(us)
00:29:16.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.130 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:16.130 nvme0n1 : 2.00 20283.91 79.23 0.00 0.00 6303.45 4041.39 21954.56
00:29:16.130 ===================================================================================================================
00:29:16.130 Total : 20283.91 79.23 0.00 0.00 6303.45 4041.39 21954.56
00:29:16.130 0
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:16.390 | .driver_specific
00:29:16.390 | .nvme_error
00:29:16.390 | .status_code
00:29:16.390 | .command_transient_transport_error'
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1602899
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1602899 ']'
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1602899
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1602899
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1602899'
killing process with pid 1602899
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1602899
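For reference, the pass/fail check traced just above simply counts how many completions the bdev layer recorded as transient transport errors. A minimal stand-alone sketch of the same query, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and the attached bdev is nvme0n1 (paths and RPC names taken from the trace; the one-line jq path mirrors the multi-line filter above):

  #!/usr/bin/env bash
  # Count completions recorded as TRANSIENT TRANSPORT ERROR (00/22) for nvme0n1.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test passes only if at least one such error was observed (159 in this run).
  (( errcount > 0 )) && echo "OK: $errcount transient transport errors" || echo "FAIL: none recorded"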
Received shutdown signal, test time was about 2.000000 seconds
00:29:16.390
00:29:16.390 Latency(us)
00:29:16.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.390 ===================================================================================================================
00:29:16.390 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:16.390 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1602899
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1603677
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1603677 /var/tmp/bperf.sock
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1603677 ']'
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:16.650 17:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:16.650 [2024-07-25 17:08:36.795426] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization...
00:29:16.650 [2024-07-25 17:08:36.795482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603677 ]
00:29:16.650 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:16.650 Zero copy mechanism will not be used.
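The randread/131072/16 run traced above is brought up the same way as the previous one: bdevperf is started in wait-for-RPC mode (-z) and the harness waits for its UNIX socket before configuring it. A rough stand-alone equivalent, using the binary path, socket and workload flags from the trace (the polling loop is a simplification of the harness's waitforlisten helper, not the original code):

  #!/usr/bin/env bash
  # Start bdevperf for the second error-injection pass: 128 KiB random reads,
  # queue depth 16, 2 second run, idle until perform_tests is sent over RPC.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Simplified wait for the RPC socket to appear before issuing any configuration RPCs.
  until [ -S "$SOCK" ]; do sleep 0.1; done
  echo "bdevperf pid $bperfpid is listening on $SOCK"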
00:29:16.650 EAL: No free 2048 kB hugepages reported on node 1
00:29:16.650 [2024-07-25 17:08:36.868737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:16.650 [2024-07-25 17:08:36.921711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:17.594 17:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:17.853 nvme0n1
00:29:17.853 17:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:17.853 17:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.853 17:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:18.113 17:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.113 17:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:18.113 17:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:18.113 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:18.113 Zero copy mechanism will not be used.
00:29:18.113 Running I/O for 2 seconds...
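Condensed, the configuration traced above is a short RPC sequence: keep per-command NVMe error statistics with unlimited bdev retries, clear any stale accel error injection, attach the NVMe/TCP controller with data digest enabled, arm crc32c corruption, then start the queued run. A hedged sketch of that sequence using only the RPCs shown in the trace; note that in the trace accel_error_inject_error is issued through rpc_cmd, i.e. against the default RPC socket of the test environment rather than the bdevperf socket, and which application that reaches depends on the harness setup:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  SOCK=/var/tmp/bperf.sock
  # Record NVMe error statistics and retry failed I/O indefinitely, so digest errors
  # show up in bdev_get_iostat instead of failing the bdevperf job outright.
  "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no crc32c error injection is left over from the previous pass.
  "$RPC" accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe/TCP controller with data digest enabled (--ddgst).
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt the next 32 crc32c operations so data digest validation fails on the I/O path.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued bdevperf workload (the 2-second randread run logged below).
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests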
00:29:18.113 [2024-07-25 17:08:38.232098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.113 [2024-07-25 17:08:38.232131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-07-25 17:08:38.232140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.113 [2024-07-25 17:08:38.247543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.113 [2024-07-25 17:08:38.247566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-07-25 17:08:38.247574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.113 [2024-07-25 17:08:38.264358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.113 [2024-07-25 17:08:38.264377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.264384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.114 [2024-07-25 17:08:38.280517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.114 [2024-07-25 17:08:38.280536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.280543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.114 [2024-07-25 17:08:38.296720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.114 [2024-07-25 17:08:38.296739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.296746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.114 [2024-07-25 17:08:38.316908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.114 [2024-07-25 17:08:38.316928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.316935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.114 [2024-07-25 17:08:38.333596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.114 [2024-07-25 17:08:38.333615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.333621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.114 [2024-07-25 17:08:38.345069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.114 [2024-07-25 17:08:38.345089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.345096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.114 [2024-07-25 17:08:38.361829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.114 [2024-07-25 17:08:38.361848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.361854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.114 [2024-07-25 17:08:38.379048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.114 [2024-07-25 17:08:38.379066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-07-25 17:08:38.379072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.374 [2024-07-25 17:08:38.394068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.374 [2024-07-25 17:08:38.394087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.374 [2024-07-25 17:08:38.394094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.374 [2024-07-25 17:08:38.409609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.374 [2024-07-25 17:08:38.409628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.374 [2024-07-25 17:08:38.409634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.374 [2024-07-25 17:08:38.426405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.374 [2024-07-25 17:08:38.426424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.374 [2024-07-25 17:08:38.426431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.374 [2024-07-25 17:08:38.441812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.374 [2024-07-25 17:08:38.441830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.374 [2024-07-25 17:08:38.441837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.374 [2024-07-25 17:08:38.459571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.459590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.459596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.475887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.475905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.475911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.492591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.492610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.492622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.508983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.509001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.509007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.525073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.525091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.525097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.541846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.541865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.541871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.557523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.557541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.375 [2024-07-25 17:08:38.557547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.574553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.574571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.574577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.592478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.592496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.592502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.609582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.609600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.609606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.627073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.627091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.627097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.375 [2024-07-25 17:08:38.643939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.375 [2024-07-25 17:08:38.643958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-25 17:08:38.643964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.635 [2024-07-25 17:08:38.659995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.635 [2024-07-25 17:08:38.660013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.635 [2024-07-25 17:08:38.660019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.635 [2024-07-25 17:08:38.677549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.635 [2024-07-25 17:08:38.677566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.635 [2024-07-25 17:08:38.677573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.635 [2024-07-25 17:08:38.690968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.635 [2024-07-25 17:08:38.690986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.635 [2024-07-25 17:08:38.690992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.635 [2024-07-25 17:08:38.708983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.635 [2024-07-25 17:08:38.709001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.635 [2024-07-25 17:08:38.709007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.635 [2024-07-25 17:08:38.727325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.727343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.727349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.741968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.741986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.741992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.759042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.759060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.759066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.776766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.776784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.776793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.797780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.797799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.797806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.811151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.811169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.811176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.825466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.825485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.825492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.838371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.838389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.838395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.852658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.852676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.852683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.867139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.867157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.867164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.881259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.881277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.881284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.891514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 
00:29:18.636 [2024-07-25 17:08:38.891532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.891539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.636 [2024-07-25 17:08:38.904321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.636 [2024-07-25 17:08:38.904342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-25 17:08:38.904349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:38.916706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:38.916725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:38.916731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:38.933715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:38.933733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:38.933739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:38.948871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:38.948889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:38.948896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:38.962488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:38.962506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:38.962512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:38.978079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:38.978098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:38.978104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:38.995451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:38.995470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:38.995476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:39.014160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:39.014178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:39.014185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:39.028935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:39.028953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:39.028959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:39.045464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:39.045483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:39.045489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:39.063684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:39.063703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:39.063709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:39.079525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:39.079543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.896 [2024-07-25 17:08:39.079550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.896 [2024-07-25 17:08:39.095455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.896 [2024-07-25 17:08:39.095474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.897 [2024-07-25 17:08:39.095481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.897 [2024-07-25 17:08:39.112925] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.897 [2024-07-25 17:08:39.112945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.897 [2024-07-25 17:08:39.112951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.897 [2024-07-25 17:08:39.129915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.897 [2024-07-25 17:08:39.129934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.897 [2024-07-25 17:08:39.129941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.897 [2024-07-25 17:08:39.144899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.897 [2024-07-25 17:08:39.144918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.897 [2024-07-25 17:08:39.144924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.897 [2024-07-25 17:08:39.161041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:18.897 [2024-07-25 17:08:39.161060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.897 [2024-07-25 17:08:39.161066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.177698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.177717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.177727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.192936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.192955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.192962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.209887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.209906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.209912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:19.157 [2024-07-25 17:08:39.227502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.227520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.227527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.243921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.243940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.243946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.258759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.258777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.258784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.274018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.274036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.274043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.292392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.292411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.292417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.308468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.308487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.308493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.324451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.324472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.324478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.340558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.340577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.340584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.357317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.357335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.357342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.373793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.373812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.373818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.390471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.390489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.390495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.406142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.406161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.406167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.157 [2024-07-25 17:08:39.423905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.157 [2024-07-25 17:08:39.423923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.157 [2024-07-25 17:08:39.423930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.440899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.440918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.440924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.457490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.457509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.457515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.474105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.474124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.474130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.489761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.489779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.489785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.506512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.506530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.506537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.523087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.523106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.523114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.540195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.540218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.540225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.557757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.557776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.557783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.574546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.574565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.574572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.590073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.590092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.590098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.607590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.607609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.607618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.624431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.624451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.418 [2024-07-25 17:08:39.624457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.418 [2024-07-25 17:08:39.638395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.418 [2024-07-25 17:08:39.638416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.419 [2024-07-25 17:08:39.638422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.419 [2024-07-25 17:08:39.653273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.419 [2024-07-25 17:08:39.653292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.419 [2024-07-25 17:08:39.653299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.419 [2024-07-25 17:08:39.666678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.419 [2024-07-25 17:08:39.666698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.419 
[2024-07-25 17:08:39.666704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.419 [2024-07-25 17:08:39.680279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.419 [2024-07-25 17:08:39.680298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.419 [2024-07-25 17:08:39.680304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.698131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.698150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.698156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.713221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.713240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.713247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.729945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.729964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.729971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.746767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.746786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.746792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.763252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.763272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.763278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.779328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.779347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.779354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.796106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.796125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.796132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.811852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.811871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.811877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.826862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.826881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.826887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.843224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.843242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.843248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.860102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.860121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.860128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.876423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.876442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.876452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.891468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.891489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.891496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.907498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.907518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.907524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.924800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.924818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.924824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.680 [2024-07-25 17:08:39.940817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.680 [2024-07-25 17:08:39.940837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.680 [2024-07-25 17:08:39.940843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:39.959303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:39.959322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:39.959328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:39.974311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:39.974330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:39.974337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:39.989176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:39.989196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:39.989207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.007537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.007556] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.007563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.028500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.028522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.028529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.044896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.044915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.044923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.060479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.060498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.060505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.079513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.079532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.079538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.097398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.097417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.097424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.112911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.112929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.112936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.129142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 
00:29:19.942 [2024-07-25 17:08:40.129162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.129169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.146197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.146219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.146226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.163497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.163515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.163521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.179304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.179324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.942 [2024-07-25 17:08:40.179330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.942 [2024-07-25 17:08:40.195679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.942 [2024-07-25 17:08:40.195698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.943 [2024-07-25 17:08:40.195705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.943 [2024-07-25 17:08:40.211576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cd89f0) 00:29:19.943 [2024-07-25 17:08:40.211594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.943 [2024-07-25 17:08:40.211601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 00:29:19.943 Latency(us) 00:29:19.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.943 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:19.943 nvme0n1 : 2.00 1905.80 238.22 0.00 0.00 8388.72 2252.80 22609.92 00:29:19.943 =================================================================================================================== 00:29:19.943 Total : 1905.80 238.22 0.00 0.00 8388.72 2252.80 22609.92 00:29:19.943 0 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount 
nvme0n1 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:20.203 | .driver_specific 00:29:20.203 | .nvme_error 00:29:20.203 | .status_code 00:29:20.203 | .command_transient_transport_error' 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1603677 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1603677 ']' 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1603677 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603677 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603677' 00:29:20.203 killing process with pid 1603677 00:29:20.203 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1603677 00:29:20.203 Received shutdown signal, test time was about 2.000000 seconds 00:29:20.203 00:29:20.203 Latency(us) 00:29:20.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.204 =================================================================================================================== 00:29:20.204 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.204 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1603677 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1604363 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1604363 /var/tmp/bperf.sock 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1604363 ']' 00:29:20.465 17:08:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.465 17:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.465 [2024-07-25 17:08:40.623297] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:29:20.465 [2024-07-25 17:08:40.623363] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604363 ] 00:29:20.465 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.465 [2024-07-25 17:08:40.711936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.725 [2024-07-25 17:08:40.782290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.297 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.870 nvme0n1 00:29:21.870 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 256 00:29:21.870 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.870 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:21.870 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.870 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:21.870 17:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.870 Running I/O for 2 seconds... 00:29:21.870 [2024-07-25 17:08:42.002119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.002989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.003025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.014555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.014850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.014880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.026922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.027341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.027367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.039318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.039750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.039774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.051647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.052111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.052136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.064028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.064494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.064516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.076369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.076800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.076824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.088837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.089281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.089305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.101192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.101521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.101546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.113544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.113902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.113926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.125904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.126232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.126258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.870 [2024-07-25 17:08:42.138191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:21.870 [2024-07-25 17:08:42.138667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.870 [2024-07-25 17:08:42.138692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.150544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 
17:08:42.150878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.150903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.162839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.163328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.163352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.175160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.175590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.175618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.187477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.187925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.187950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.199834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.200139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.200164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.212146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.212619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.212643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.224464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.224791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.224815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.236770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 
00:29:22.132 [2024-07-25 17:08:42.237105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.237128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.249047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.249466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.249490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.261371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.261822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.261845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.273711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.274041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.274064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.285964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.286442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.286467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.298229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.298655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.298678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.310546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640 00:29:22.132 [2024-07-25 17:08:42.310942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.132 [2024-07-25 17:08:42.310965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.132 [2024-07-25 17:08:42.322856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc5c0c0) with pdu=0x2000190fd640
00:29:22.132 [2024-07-25 17:08:42.323358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.132 [2024-07-25 17:08:42.323381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:22.132 - 00:29:23.715 [2024-07-25 17:08:42.335134 through 17:08:43.938096] The same three records (tcp.c:2113:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640; nvme_qpair.c:243 WRITE sqid:1 cid:124 nsid:1 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000; nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0) repeat for every queued write, roughly one failed write every 12 ms, with only the timestamps and the lba value changing (lba:4029, 4116, 20769, 2321, 16989, 1472, 4875, ..., 17648).
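Each of the repeated triplets above is a single 4 KiB random write (len:1 block, 0x1000 bytes) on qpair 1 whose TCP data digest (CRC32C) check failed in data_crc32_calc_done; the command is then completed back to bdevperf with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status that later shows up in the nvme_error counters returned by bdev_get_iostat. If this console output is saved to a file, the failures can be tallied offline with ordinary shell tools; the file name below is only an assumption for illustration, not something the test produces:

  # Hypothetical offline tally of the failures above; "bdevperf_console.log" is a
  # placeholder for a saved copy of this console output.
  log=bdevperf_console.log
  echo "digest errors:      $(grep -c 'Data digest error on tqpair' "$log")"
  echo "transient failures: $(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log")"
  grep -o 'lba:[0-9]*' "$log" | sort -t: -k2 -n | uniq | head   # distinct LBAs hit by failed writes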
[2024-07-25 17:08:43.949866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640
00:29:23.715 [2024-07-25 17:08:43.950337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.715 [2024-07-25 17:08:43.950353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.715 [2024-07-25 17:08:43.962151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640
00:29:23.715 [2024-07-25 17:08:43.962502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.715 [2024-07-25 17:08:43.962519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.715 [2024-07-25 17:08:43.974659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640
00:29:23.715 [2024-07-25 17:08:43.975215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.715 [2024-07-25 17:08:43.975231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.014 [2024-07-25 17:08:43.987004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c0c0) with pdu=0x2000190fd640
00:29:24.014 [2024-07-25 17:08:43.987336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:24.014 [2024-07-25 17:08:43.987351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.014
00:29:24.014 Latency(us)
00:29:24.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.014 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:24.014 nvme0n1 : 2.01 20638.45 80.62 0.00 0.00 6189.25 3031.04 15291.73
00:29:24.014 ===================================================================================================================
00:29:24.014 Total : 20638.45 80.62 0.00 0.00 6189.25 3031.04 15291.73
00:29:24.014 0
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:24.014 | .driver_specific
00:29:24.014 | .nvme_error
00:29:24.014 | .status_code
00:29:24.014 | .command_transient_transport_error'
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1604363
00:29:24.014 17:08:44
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1604363 ']'
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1604363
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1604363
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1604363'
00:29:24.014 killing process with pid 1604363
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1604363
00:29:24.014 Received shutdown signal, test time was about 2.000000 seconds
00:29:24.014
00:29:24.014 Latency(us)
00:29:24.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.014 ===================================================================================================================
00:29:24.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:24.014 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1604363
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1605051
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1605051 /var/tmp/bperf.sock
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1605051 ']'
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:24.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
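The run_bperf_err step above launches a second bdevperf in RPC-server mode (-z) and then blocks until its UNIX socket is usable. SPDK's waitforlisten helper is more involved than a file check, so the loop below is only a rough standalone equivalent of this launch-and-wait step, reusing the binary path, arguments and retry budget shown in the trace:

#!/usr/bin/env bash
# Sketch only: start bdevperf as an RPC server and wait for its socket,
# mirroring the launch traced above (not SPDK's waitforlisten, just an approximation).
ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
max_retries=100

"$ROOT"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll for the RPC socket instead of racing the process startup.
while (( max_retries-- > 0 )) && [[ ! -S $SOCK ]]; do
    sleep 0.1
done
[[ -S $SOCK ]] || { echo "bdevperf (pid $bperfpid) never created $SOCK" >&2; exit 1; }
echo "bdevperf listening on $SOCK (pid $bperfpid)"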
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:24.280 17:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:24.280 [2024-07-25 17:08:44.395618] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization...
00:29:24.280 [2024-07-25 17:08:44.395673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605051 ]
00:29:24.280 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:24.280 Zero copy mechanism will not be used.
00:29:24.280 EAL: No free 2048 kB hugepages reported on node 1
00:29:24.280 [2024-07-25 17:08:44.468535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.280 [2024-07-25 17:08:44.521590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:25.222 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:25.484 nvme0n1
00:29:25.484 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:25.484 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.484 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:25.746 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.746 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:25.746 17:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.746 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:25.746 Zero copy mechanism will not be used. 00:29:25.746 Running I/O for 2 seconds... 00:29:25.746 [2024-07-25 17:08:45.862704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.746 [2024-07-25 17:08:45.863000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.746 [2024-07-25 17:08:45.863028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.746 [2024-07-25 17:08:45.876402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.746 [2024-07-25 17:08:45.876685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.746 [2024-07-25 17:08:45.876709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.746 [2024-07-25 17:08:45.889377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.889609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.889627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:45.902836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.903113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.903134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:45.916506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.916732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.916749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:45.930014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.930286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.930308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:45.943738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.944006] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.944027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:45.958783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.959053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.959073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:45.973052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.973331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.973350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:45.987194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:45.987356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:45.987373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:46.001421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:46.001690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:46.001711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.747 [2024-07-25 17:08:46.014922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:25.747 [2024-07-25 17:08:46.015197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.747 [2024-07-25 17:08:46.015221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.029113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.029381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.029403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.042847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 
00:29:26.009 [2024-07-25 17:08:46.043113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.043132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.057092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.057367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.057386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.069835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.070103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.070122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.083605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.083872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.083891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.096805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.097073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.097091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.110635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.110910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.110930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.124825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.125091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.125108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.138337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.138559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.138576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.151989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.152266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.152286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.166096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.009 [2024-07-25 17:08:46.166376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.009 [2024-07-25 17:08:46.166395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.009 [2024-07-25 17:08:46.178956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.179227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.179247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.010 [2024-07-25 17:08:46.192883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.193151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.193170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.010 [2024-07-25 17:08:46.205913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.206180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.206211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.010 [2024-07-25 17:08:46.219217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.219485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.219503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.010 [2024-07-25 17:08:46.232701] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.232971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.232992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.010 [2024-07-25 17:08:46.246245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.246514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.246534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.010 [2024-07-25 17:08:46.258274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.258441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.258459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.010 [2024-07-25 17:08:46.271931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.010 [2024-07-25 17:08:46.272206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.010 [2024-07-25 17:08:46.272225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.272 [2024-07-25 17:08:46.284519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.272 [2024-07-25 17:08:46.284789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.272 [2024-07-25 17:08:46.284808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.272 [2024-07-25 17:08:46.298448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.272 [2024-07-25 17:08:46.298718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.272 [2024-07-25 17:08:46.298737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.272 [2024-07-25 17:08:46.311078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.272 [2024-07-25 17:08:46.311351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.272 [2024-07-25 17:08:46.311371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
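All of the failures streaming past here come from the setup traced just before this run: NVMe error statistics turned on, the accel CRC32C path told to corrupt its results, and the controller attached with the data digest flag so every write carries a digest that no longer verifies. Collected in one place, the RPC sequence as it appears in this log (a sketch, not a drop-in script; bperf_rpc targets the bdevperf socket shown above, while the suite's rpc_cmd is assumed here to use rpc.py's default socket, which the trace does not expand):

#!/usr/bin/env bash
# Sketch only: the RPC sequence behind the digest-error run above, collected from the trace.
ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

bperf_rpc()  { "$ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
target_rpc() { "$ROOT"/scripts/rpc.py "$@"; }   # default socket; an assumption, not shown in the trace

# Per-status-code NVMe error counters, retries kept inside the bdev layer (-1 as traced).
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest (--ddgst) enabled while CRC32C error injection is still disabled.
target_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt CRC32C results (arguments exactly as traced) so data digests stop matching.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the configured 2-second randwrite workload; each affected write completes as 00/22 above.
"$ROOT"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests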
00:29:26.272 [2024-07-25 17:08:46.324953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.272 [2024-07-25 17:08:46.325230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.272 [2024-07-25 17:08:46.325251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.272 [2024-07-25 17:08:46.338157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.272 [2024-07-25 17:08:46.338428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.272 [2024-07-25 17:08:46.338448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.272 [2024-07-25 17:08:46.352099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.352370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.352390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.365219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.365496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.365514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.378879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.379041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.379059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.393349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.393615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.393633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.407698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.407964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.407984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.420799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.421065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.421084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.435148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.435415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.435435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.449523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.449792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.449811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.463475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.463743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.463762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.477618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.477886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.477904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.492179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.492458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.492477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.506800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.507068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.507088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.520161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.520428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.273 [2024-07-25 17:08:46.534173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.273 [2024-07-25 17:08:46.534454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.273 [2024-07-25 17:08:46.534474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.536 [2024-07-25 17:08:46.547208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.536 [2024-07-25 17:08:46.547477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.536 [2024-07-25 17:08:46.547496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.536 [2024-07-25 17:08:46.560834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.536 [2024-07-25 17:08:46.561109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.536 [2024-07-25 17:08:46.561131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.536 [2024-07-25 17:08:46.574472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.536 [2024-07-25 17:08:46.574630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.536 [2024-07-25 17:08:46.574647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.536 [2024-07-25 17:08:46.588525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.536 [2024-07-25 17:08:46.588793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.536 [2024-07-25 17:08:46.588813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.536 [2024-07-25 17:08:46.600987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.536 [2024-07-25 17:08:46.601259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.536 [2024-07-25 17:08:46.601278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.536 [2024-07-25 17:08:46.614671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.536 [2024-07-25 17:08:46.614937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.536 [2024-07-25 17:08:46.614956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.536 [2024-07-25 17:08:46.627324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.536 [2024-07-25 17:08:46.627594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.536 [2024-07-25 17:08:46.627613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.640659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.640935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.640954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.653580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.653846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.653865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.666331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.666597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.666617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.680706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.680981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.681002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.694570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.694837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 
[2024-07-25 17:08:46.694857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.708147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.708417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.708437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.721946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.722218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.722236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.734382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.734652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.734669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.747499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.747728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.747745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.761405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.761674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.761693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.774273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.774557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.774576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.787065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.787338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.787359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.537 [2024-07-25 17:08:46.800437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.537 [2024-07-25 17:08:46.800710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.537 [2024-07-25 17:08:46.800730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.813990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.814154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.814172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.826786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.827053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.827073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.840505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.840776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.840795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.854166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.854443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.854462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.867682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.867951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.867969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.880938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.881211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.881231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.893511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.893778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.893797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.906614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.906884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.906906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.919351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.919619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.919638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.931779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.932045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.932064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.944309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.944579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.944599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.957785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.958055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.958075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.971710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.971978] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.971998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:46.986617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:46.986884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:46.986903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:47.000571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:47.000869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:47.000889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:47.014838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:47.015104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:47.015123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:47.029093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:47.029369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:47.029388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:47.042510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:47.042778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:47.042798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:47.055918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.800 [2024-07-25 17:08:47.056185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.800 [2024-07-25 17:08:47.056208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.800 [2024-07-25 17:08:47.070064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:26.801 [2024-07-25 17:08:47.070343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.801 [2024-07-25 17:08:47.070370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.083621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.083863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.083880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.097722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.097992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.098011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.111463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.111735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.111754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.125463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.125729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.125749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.138943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.139213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.139234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.152384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.152650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.152669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.166208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 
17:08:47.166477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.166497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.180049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.180320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.180347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.193570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.193747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.193765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.207622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.207886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.207906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.221209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.221479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.221498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.235085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.235357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.235376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.249590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.249861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.249880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.262274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with 
pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.262545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.262568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.276928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.277193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.277218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.290626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.290786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.290807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.063 [2024-07-25 17:08:47.304427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.063 [2024-07-25 17:08:47.304693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.063 [2024-07-25 17:08:47.304712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.064 [2024-07-25 17:08:47.317855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.064 [2024-07-25 17:08:47.318121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.064 [2024-07-25 17:08:47.318141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.064 [2024-07-25 17:08:47.331150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.064 [2024-07-25 17:08:47.331417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.064 [2024-07-25 17:08:47.331437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.326 [2024-07-25 17:08:47.344967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.326 [2024-07-25 17:08:47.345240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-07-25 17:08:47.345260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.326 [2024-07-25 17:08:47.358870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.326 [2024-07-25 17:08:47.359138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-07-25 17:08:47.359157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.326 [2024-07-25 17:08:47.373076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.326 [2024-07-25 17:08:47.373348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-07-25 17:08:47.373368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.326 [2024-07-25 17:08:47.386530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.326 [2024-07-25 17:08:47.386809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-07-25 17:08:47.386828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.326 [2024-07-25 17:08:47.400878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.326 [2024-07-25 17:08:47.401144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-07-25 17:08:47.401163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.326 [2024-07-25 17:08:47.414400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.326 [2024-07-25 17:08:47.414622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-07-25 17:08:47.414640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.326 [2024-07-25 17:08:47.427965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.326 [2024-07-25 17:08:47.428242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-07-25 17:08:47.428262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.441418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.441631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.441649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.455305] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.455581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.455600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.467997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.468293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.468316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.481373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.481641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.481660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.494664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.494932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.494951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.507175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.507450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.507468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.519777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.519947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.519964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.532963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.533236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.533256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.327 
[2024-07-25 17:08:47.546583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.546741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.546760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.560338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.560604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.560624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.573033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.573305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.573325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.585565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.585832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.585852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.327 [2024-07-25 17:08:47.597814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.327 [2024-07-25 17:08:47.598087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-07-25 17:08:47.598107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.589 [2024-07-25 17:08:47.610654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.589 [2024-07-25 17:08:47.610923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.589 [2024-07-25 17:08:47.610945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.589 [2024-07-25 17:08:47.624038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.589 [2024-07-25 17:08:47.624317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.589 [2024-07-25 17:08:47.624336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:27.589 [2024-07-25 17:08:47.637134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.589 [2024-07-25 17:08:47.637403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.589 [2024-07-25 17:08:47.637422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.589 [2024-07-25 17:08:47.650968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.589 [2024-07-25 17:08:47.651241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.651261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.663831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.664100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.664119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.677189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.677461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.677478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.690229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.690496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.690515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.703582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.703850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.703869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.717127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.717400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.717420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.731229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.731503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.731522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.745122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.745388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.745407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.758268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.758526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.758546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.771937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.772216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.772236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.785110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.785380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.785399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.798542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.798812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.798831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.812006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.812278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.812297] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.825945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.826217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.826236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.838901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.839127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.839149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.590 [2024-07-25 17:08:47.851836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc5c400) with pdu=0x2000190fef90 00:29:27.590 [2024-07-25 17:08:47.852004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.590 [2024-07-25 17:08:47.852021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.590 00:29:27.590 Latency(us) 00:29:27.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.590 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:27.590 nvme0n1 : 2.01 2287.92 285.99 0.00 0.00 6978.11 5816.32 15291.73 00:29:27.590 =================================================================================================================== 00:29:27.590 Total : 2287.92 285.99 0.00 0.00 6978.11 5816.32 15291.73 00:29:27.590 0 00:29:27.852 17:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:27.852 17:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:27.852 17:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:27.852 | .driver_specific 00:29:27.852 | .nvme_error 00:29:27.852 | .status_code 00:29:27.852 | .command_transient_transport_error' 00:29:27.852 17:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1605051 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1605051 ']' 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1605051 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1605051 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1605051' 00:29:27.852 killing process with pid 1605051 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1605051 00:29:27.852 Received shutdown signal, test time was about 2.000000 seconds 00:29:27.852 00:29:27.852 Latency(us) 00:29:27.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.852 =================================================================================================================== 00:29:27.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.852 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1605051 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1602651 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1602651 ']' 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1602651 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1602651 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1602651' 00:29:28.114 killing process with pid 1602651 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1602651 00:29:28.114 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1602651 00:29:28.377 00:29:28.377 real 0m16.378s 00:29:28.377 user 0m32.434s 00:29:28.377 sys 0m3.025s 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.377 ************************************ 00:29:28.377 END TEST nvmf_digest_error 00:29:28.377 ************************************ 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.377 17:08:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.377 rmmod nvme_tcp 00:29:28.377 rmmod nvme_fabrics 00:29:28.377 rmmod nvme_keyring 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1602651 ']' 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1602651 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1602651 ']' 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1602651 00:29:28.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1602651) - No such process 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1602651 is not found' 00:29:28.377 Process with pid 1602651 is not found 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.377 17:08:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:30.928 00:29:30.928 real 0m40.701s 00:29:30.928 user 1m2.965s 00:29:30.928 sys 0m11.616s 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:30.928 ************************************ 00:29:30.928 END TEST nvmf_digest 00:29:30.928 ************************************ 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:29:30.928 17:08:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.929 ************************************ 00:29:30.929 START TEST nvmf_bdevperf 00:29:30.929 ************************************ 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:30.929 * Looking for test storage... 00:29:30.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.929 17:08:50 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:30.929 17:08:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.525 17:08:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:37.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:37.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:37.525 17:08:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:37.525 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:37.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.525 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:37.526 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.526 17:08:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.526 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:37.526 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:37.526 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.526 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.787 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.787 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.787 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:37.787 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.787 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.787 17:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:37.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:29:37.787 00:29:37.787 --- 10.0.0.2 ping statistics --- 00:29:37.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.787 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:29:37.787 00:29:37.787 --- 10.0.0.1 ping statistics --- 00:29:37.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.787 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:37.787 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1610055 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1610055 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1610055 ']' 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:38.049 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.049 [2024-07-25 17:08:58.143371] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
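For reference, the connectivity check and target launch recorded above reduce to a short sequence of iproute2 and SPDK commands. This is only a minimal sketch of the equivalent manual steps, reusing the interface names (cvl_0_0/cvl_0_1), namespace, addresses and core mask from this particular run; backgrounding the target with "&" and the exact wait behaviour are assumptions based on the "Waiting for process ..." message that follows.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let TCP/4420 through the host firewall
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# the harness then waits for the target's RPC socket (/var/tmp/spdk.sock) to come up before issuing RPCs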
00:29:38.049 [2024-07-25 17:08:58.143439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.049 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.049 [2024-07-25 17:08:58.230750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:38.311 [2024-07-25 17:08:58.327369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.311 [2024-07-25 17:08:58.327427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.311 [2024-07-25 17:08:58.327436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.311 [2024-07-25 17:08:58.327443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.311 [2024-07-25 17:08:58.327449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.311 [2024-07-25 17:08:58.327603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.311 [2024-07-25 17:08:58.327771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.311 [2024-07-25 17:08:58.327771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.884 [2024-07-25 17:08:58.964489] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.884 17:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.884 Malloc0 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.884 17:08:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.884 [2024-07-25 17:08:59.034261] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:38.884 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.885 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.885 { 00:29:38.885 "params": { 00:29:38.885 "name": "Nvme$subsystem", 00:29:38.885 "trtype": "$TEST_TRANSPORT", 00:29:38.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.885 "adrfam": "ipv4", 00:29:38.885 "trsvcid": "$NVMF_PORT", 00:29:38.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.885 "hdgst": ${hdgst:-false}, 00:29:38.885 "ddgst": ${ddgst:-false} 00:29:38.885 }, 00:29:38.885 "method": "bdev_nvme_attach_controller" 00:29:38.885 } 00:29:38.885 EOF 00:29:38.885 )") 00:29:38.885 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:38.885 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:38.885 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:38.885 17:08:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:38.885 "params": { 00:29:38.885 "name": "Nvme1", 00:29:38.885 "trtype": "tcp", 00:29:38.885 "traddr": "10.0.0.2", 00:29:38.885 "adrfam": "ipv4", 00:29:38.885 "trsvcid": "4420", 00:29:38.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.885 "hdgst": false, 00:29:38.885 "ddgst": false 00:29:38.885 }, 00:29:38.885 "method": "bdev_nvme_attach_controller" 00:29:38.885 }' 00:29:38.885 [2024-07-25 17:08:59.088044] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:29:38.885 [2024-07-25 17:08:59.088096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610151 ] 00:29:38.885 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.885 [2024-07-25 17:08:59.145585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.146 [2024-07-25 17:08:59.210600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.407 Running I/O for 1 seconds... 00:29:40.351 00:29:40.351 Latency(us) 00:29:40.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.351 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.351 Verification LBA range: start 0x0 length 0x4000 00:29:40.351 Nvme1n1 : 1.01 10648.07 41.59 0.00 0.00 11955.88 1966.08 11960.32 00:29:40.351 =================================================================================================================== 00:29:40.351 Total : 10648.07 41.59 0.00 0.00 11955.88 1966.08 11960.32 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1610468 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.351 { 00:29:40.351 "params": { 00:29:40.351 "name": "Nvme$subsystem", 00:29:40.351 "trtype": "$TEST_TRANSPORT", 00:29:40.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.351 "adrfam": "ipv4", 00:29:40.351 "trsvcid": "$NVMF_PORT", 00:29:40.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.351 "hdgst": ${hdgst:-false}, 00:29:40.351 "ddgst": ${ddgst:-false} 00:29:40.351 }, 00:29:40.351 "method": "bdev_nvme_attach_controller" 00:29:40.351 } 00:29:40.351 EOF 00:29:40.351 )") 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:40.351 17:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:40.351 "params": { 00:29:40.351 "name": "Nvme1", 00:29:40.351 "trtype": "tcp", 00:29:40.351 "traddr": "10.0.0.2", 00:29:40.351 "adrfam": "ipv4", 00:29:40.351 "trsvcid": "4420", 00:29:40.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:40.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:40.351 "hdgst": false, 00:29:40.351 "ddgst": false 00:29:40.351 }, 00:29:40.351 "method": "bdev_nvme_attach_controller" 00:29:40.351 }' 00:29:40.613 [2024-07-25 17:09:00.636246] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
00:29:40.613 [2024-07-25 17:09:00.636306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610468 ] 00:29:40.613 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.613 [2024-07-25 17:09:00.695445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.613 [2024-07-25 17:09:00.759418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.875 Running I/O for 15 seconds... 00:29:43.426 17:09:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1610055 00:29:43.426 17:09:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:43.426 [2024-07-25 17:09:03.601011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.426 [2024-07-25 17:09:03.601241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.426 [2024-07-25 17:09:03.601261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.426 [2024-07-25 17:09:03.601280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.426 [2024-07-25 17:09:03.601301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.426 [2024-07-25 17:09:03.601322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.426 [2024-07-25 17:09:03.601343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.426 [2024-07-25 17:09:03.601546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.426 [2024-07-25 17:09:03.601553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.601569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:43.427 [2024-07-25 17:09:03.601578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.601585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.601601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 
17:09:03.601741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.601976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.601985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.427 [2024-07-25 17:09:03.601992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.427 [2024-07-25 17:09:03.602172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.427 [2024-07-25 17:09:03.602181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118792 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:43.428 [2024-07-25 17:09:03.602570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 
17:09:03.602734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.428 [2024-07-25 17:09:03.602782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.428 [2024-07-25 17:09:03.602791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.602966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.429 [2024-07-25 17:09:03.602982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.602991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.429 [2024-07-25 17:09:03.602998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.429 [2024-07-25 17:09:03.603014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.429 [2024-07-25 17:09:03.603030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.429 [2024-07-25 17:09:03.603046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.429 [2024-07-25 17:09:03.603062] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.429 [2024-07-25 17:09:03.603079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.603096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.603112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.603129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.603145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.603161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.603178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.429 [2024-07-25 17:09:03.603193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122570 is same with the state(5) to be set 00:29:43.429 [2024-07-25 17:09:03.603290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.429 [2024-07-25 17:09:03.603296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.429 [2024-07-25 17:09:03.603303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119128 len:8 PRP1 0x0 PRP2 0x0 00:29:43.429 [2024-07-25 17:09:03.603310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.429 [2024-07-25 17:09:03.603348] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2122570 was disconnected and freed. reset controller. 00:29:43.429 [2024-07-25 17:09:03.606942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.429 [2024-07-25 17:09:03.606989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.429 [2024-07-25 17:09:03.607903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.429 [2024-07-25 17:09:03.607919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.429 [2024-07-25 17:09:03.607927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.429 [2024-07-25 17:09:03.608146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.429 [2024-07-25 17:09:03.608374] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.429 [2024-07-25 17:09:03.608383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.429 [2024-07-25 17:09:03.608391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.429 [2024-07-25 17:09:03.611896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.429 [2024-07-25 17:09:03.620982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.429 [2024-07-25 17:09:03.621664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.429 [2024-07-25 17:09:03.621702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.429 [2024-07-25 17:09:03.621713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.429 [2024-07-25 17:09:03.621951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.429 [2024-07-25 17:09:03.622171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.429 [2024-07-25 17:09:03.622180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.429 [2024-07-25 17:09:03.622188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.429 [2024-07-25 17:09:03.625706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
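The trace above is the failover path this test exercises: three seconds into a 15-second verify run, bdevperf.sh hard-kills the nvmf target (kill -9 1610055), every queued I/O on qpair 0x2122570 is aborted with ABORTED - SQ DELETION, the qpair is disconnected and freed, and bdev_nvme then retries the controller reset, failing with connect() errno 111 (connection refused) for as long as nothing listens on 10.0.0.2 port 4420. A minimal host-side sketch of that sequence, reconstructed from the bdevperf.sh lines traced above (the workspace path and the gen_nvmf_target_json helper from nvmf/common.sh are taken from this run and are assumptions outside it):

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # 15-second verify workload; -f (as used in the trace) lets bdevperf keep running after I/O failures
    "$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    # hard-kill the target: outstanding I/O is aborted (SQ DELETION) and the
    # reset/reconnect loop seen in this log begins, repeating until the target returns
    kill -9 "$nvmfpid"        # nvmfpid was 1610055 in this run
    sleep 3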
00:29:43.429 [2024-07-25 17:09:03.634786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.429 [2024-07-25 17:09:03.635602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.429 [2024-07-25 17:09:03.635639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.429 [2024-07-25 17:09:03.635649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.429 [2024-07-25 17:09:03.635887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.429 [2024-07-25 17:09:03.636107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.430 [2024-07-25 17:09:03.636116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.430 [2024-07-25 17:09:03.636124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.430 [2024-07-25 17:09:03.639633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.430 [2024-07-25 17:09:03.648729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.430 [2024-07-25 17:09:03.649551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.430 [2024-07-25 17:09:03.649588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.430 [2024-07-25 17:09:03.649599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.430 [2024-07-25 17:09:03.649836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.430 [2024-07-25 17:09:03.650056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.430 [2024-07-25 17:09:03.650065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.430 [2024-07-25 17:09:03.650072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.430 [2024-07-25 17:09:03.653588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.430 [2024-07-25 17:09:03.662665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.430 [2024-07-25 17:09:03.663480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.430 [2024-07-25 17:09:03.663517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.430 [2024-07-25 17:09:03.663527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.430 [2024-07-25 17:09:03.663764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.430 [2024-07-25 17:09:03.663984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.430 [2024-07-25 17:09:03.663992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.430 [2024-07-25 17:09:03.664001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.430 [2024-07-25 17:09:03.667509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.430 [2024-07-25 17:09:03.676587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.430 [2024-07-25 17:09:03.677468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.430 [2024-07-25 17:09:03.677505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.430 [2024-07-25 17:09:03.677516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.430 [2024-07-25 17:09:03.677753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.430 [2024-07-25 17:09:03.677973] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.430 [2024-07-25 17:09:03.677982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.430 [2024-07-25 17:09:03.677989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.430 [2024-07-25 17:09:03.681499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.430 [2024-07-25 17:09:03.690371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.430 [2024-07-25 17:09:03.691129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.430 [2024-07-25 17:09:03.691166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.430 [2024-07-25 17:09:03.691177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.430 [2024-07-25 17:09:03.691422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.430 [2024-07-25 17:09:03.691643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.430 [2024-07-25 17:09:03.691652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.430 [2024-07-25 17:09:03.691660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.430 [2024-07-25 17:09:03.695159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.696 [2024-07-25 17:09:03.704242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.696 [2024-07-25 17:09:03.704980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.696 [2024-07-25 17:09:03.705017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.696 [2024-07-25 17:09:03.705032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.696 [2024-07-25 17:09:03.705278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.696 [2024-07-25 17:09:03.705500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.696 [2024-07-25 17:09:03.705508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.696 [2024-07-25 17:09:03.705516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.696 [2024-07-25 17:09:03.709017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.696 [2024-07-25 17:09:03.718094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.696 [2024-07-25 17:09:03.718916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.696 [2024-07-25 17:09:03.718953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.696 [2024-07-25 17:09:03.718964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.696 [2024-07-25 17:09:03.719213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.696 [2024-07-25 17:09:03.719433] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.696 [2024-07-25 17:09:03.719442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.696 [2024-07-25 17:09:03.719450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.696 [2024-07-25 17:09:03.722950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.696 [2024-07-25 17:09:03.732028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.696 [2024-07-25 17:09:03.732806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.696 [2024-07-25 17:09:03.732843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.696 [2024-07-25 17:09:03.732854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.696 [2024-07-25 17:09:03.733090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.696 [2024-07-25 17:09:03.733318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.696 [2024-07-25 17:09:03.733327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.696 [2024-07-25 17:09:03.733335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.696 [2024-07-25 17:09:03.736838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.696 [2024-07-25 17:09:03.745927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.696 [2024-07-25 17:09:03.746699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.696 [2024-07-25 17:09:03.746737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.696 [2024-07-25 17:09:03.746748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.696 [2024-07-25 17:09:03.746984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.696 [2024-07-25 17:09:03.747214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.696 [2024-07-25 17:09:03.747228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.696 [2024-07-25 17:09:03.747236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.696 [2024-07-25 17:09:03.750748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.696 [2024-07-25 17:09:03.759821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.696 [2024-07-25 17:09:03.760643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.696 [2024-07-25 17:09:03.760681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.696 [2024-07-25 17:09:03.760691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.696 [2024-07-25 17:09:03.760928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.696 [2024-07-25 17:09:03.761148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.696 [2024-07-25 17:09:03.761157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.696 [2024-07-25 17:09:03.761164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.696 [2024-07-25 17:09:03.764675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.696 [2024-07-25 17:09:03.773752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.696 [2024-07-25 17:09:03.774538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.696 [2024-07-25 17:09:03.774576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.696 [2024-07-25 17:09:03.774586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.696 [2024-07-25 17:09:03.774823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.775043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.775051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.775059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.778570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.697 [2024-07-25 17:09:03.787647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.788353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.788390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.788401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.788637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.788865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.788873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.788881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.792392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.697 [2024-07-25 17:09:03.801478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.802300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.802338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.802350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.802590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.802809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.802818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.802826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.806336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.697 [2024-07-25 17:09:03.815413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.816218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.816256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.816266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.816503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.816723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.816732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.816739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.820252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.697 [2024-07-25 17:09:03.829207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.829970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.830007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.830018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.830264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.830485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.830494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.830501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.834002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.697 [2024-07-25 17:09:03.843087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.843915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.843952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.843967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.844213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.844433] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.844442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.844450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.847952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.697 [2024-07-25 17:09:03.856829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.857622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.857659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.857670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.857906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.858126] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.858135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.858143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.861655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.697 [2024-07-25 17:09:03.870736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.871533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.871570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.871581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.871817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.872038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.872046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.872054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.875561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.697 [2024-07-25 17:09:03.884635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.885301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.885338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.885350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.885588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.885808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.885821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.885829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.889338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.697 [2024-07-25 17:09:03.898418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.899001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.899020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.899027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.899252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.899469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.899477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.899484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.902977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.697 [2024-07-25 17:09:03.912260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.913062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.913098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.913109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.913356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.913576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.913585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.913593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.917097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.697 [2024-07-25 17:09:03.926185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.927004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.927041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.927051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.927301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.927522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.927530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.927538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.931040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.697 [2024-07-25 17:09:03.940120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.940859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.940877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.940885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.941101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.941332] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.941341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.941348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.944844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.697 [2024-07-25 17:09:03.953950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.697 [2024-07-25 17:09:03.954716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.697 [2024-07-25 17:09:03.954753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:43.697 [2024-07-25 17:09:03.954763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:43.697 [2024-07-25 17:09:03.955000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:43.697 [2024-07-25 17:09:03.955229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.697 [2024-07-25 17:09:03.955238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.697 [2024-07-25 17:09:03.955246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.697 [2024-07-25 17:09:03.958749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.697 [2024-07-25 17:09:03.968052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.010 [2024-07-25 17:09:03.968873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.010 [2024-07-25 17:09:03.968911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.010 [2024-07-25 17:09:03.968922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.010 [2024-07-25 17:09:03.969158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.010 [2024-07-25 17:09:03.969388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.010 [2024-07-25 17:09:03.969398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.010 [2024-07-25 17:09:03.969406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.010 [2024-07-25 17:09:03.972909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.010 [2024-07-25 17:09:03.982002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.010 [2024-07-25 17:09:03.982826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.010 [2024-07-25 17:09:03.982862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.010 [2024-07-25 17:09:03.982873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.010 [2024-07-25 17:09:03.983114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.010 [2024-07-25 17:09:03.983346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.010 [2024-07-25 17:09:03.983355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.010 [2024-07-25 17:09:03.983363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.010 [2024-07-25 17:09:03.986869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.010 [2024-07-25 17:09:03.995756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.010 [2024-07-25 17:09:03.996449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.010 [2024-07-25 17:09:03.996468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.010 [2024-07-25 17:09:03.996476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.010 [2024-07-25 17:09:03.996693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.010 [2024-07-25 17:09:03.996909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.010 [2024-07-25 17:09:03.996917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.010 [2024-07-25 17:09:03.996924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.010 [2024-07-25 17:09:04.000426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.010 [2024-07-25 17:09:04.009498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.010 [2024-07-25 17:09:04.010174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.010 [2024-07-25 17:09:04.010190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.010 [2024-07-25 17:09:04.010198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.010 [2024-07-25 17:09:04.010420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.010 [2024-07-25 17:09:04.010636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.010 [2024-07-25 17:09:04.010644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.010 [2024-07-25 17:09:04.010651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.010 [2024-07-25 17:09:04.014143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.010 [2024-07-25 17:09:04.023423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.010 [2024-07-25 17:09:04.024182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.010 [2024-07-25 17:09:04.024226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.010 [2024-07-25 17:09:04.024238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.010 [2024-07-25 17:09:04.024474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.010 [2024-07-25 17:09:04.024695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.010 [2024-07-25 17:09:04.024703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.010 [2024-07-25 17:09:04.024715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.010 [2024-07-25 17:09:04.028223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.010 [2024-07-25 17:09:04.037305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.010 [2024-07-25 17:09:04.038065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.010 [2024-07-25 17:09:04.038101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.010 [2024-07-25 17:09:04.038112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.038358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.038579] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.038588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.038595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.042108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.011 [2024-07-25 17:09:04.051194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.051936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.051954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.051962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.052178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.052403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.052411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.052418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.055914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.011 [2024-07-25 17:09:04.064995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.065771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.065809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.065821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.066058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.066286] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.066295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.066303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.069812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.011 [2024-07-25 17:09:04.078901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.079684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.079726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.079736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.079973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.080194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.080211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.080219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.083719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.011 [2024-07-25 17:09:04.092797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.093508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.093546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.093556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.093793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.094013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.094022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.094029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.097537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.011 [2024-07-25 17:09:04.106615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.107425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.107462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.107473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.107709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.107929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.107939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.107946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.111455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.011 [2024-07-25 17:09:04.120533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.121306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.121344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.121357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.121595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.121823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.121832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.121839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.125350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.011 [2024-07-25 17:09:04.134434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.135234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.135271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.135282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.135518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.135739] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.135748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.135756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.139268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.011 [2024-07-25 17:09:04.148358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.149105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.149123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.149131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.149353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.149570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.149578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.149585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.011 [2024-07-25 17:09:04.153087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.011 [2024-07-25 17:09:04.162181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.011 [2024-07-25 17:09:04.162947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.011 [2024-07-25 17:09:04.162984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.011 [2024-07-25 17:09:04.162996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.011 [2024-07-25 17:09:04.163244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.011 [2024-07-25 17:09:04.163464] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.011 [2024-07-25 17:09:04.163474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.011 [2024-07-25 17:09:04.163481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.167005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.012 [2024-07-25 17:09:04.176103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.176929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.176967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.176978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.177223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.177444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.177453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.177460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.180970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.012 [2024-07-25 17:09:04.189856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.190629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.190667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.190678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.190914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.191136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.191145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.191152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.194664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.012 [2024-07-25 17:09:04.203746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.204539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.204577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.204587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.204824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.205045] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.205053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.205061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.208571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.012 [2024-07-25 17:09:04.217651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.218213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.218232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.218244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.218461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.218677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.218684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.218691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.222185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.012 [2024-07-25 17:09:04.231478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.232043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.232058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.232065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.232288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.232505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.232513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.232519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.236019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.012 [2024-07-25 17:09:04.245334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.246055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.246070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.246077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.246299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.246515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.246523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.246530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.250026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.012 [2024-07-25 17:09:04.259119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.259840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.259856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.259863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.260079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.260302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.260315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.260322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.263823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.012 [2024-07-25 17:09:04.272898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.012 [2024-07-25 17:09:04.273712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.012 [2024-07-25 17:09:04.273749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.012 [2024-07-25 17:09:04.273760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.012 [2024-07-25 17:09:04.273996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.012 [2024-07-25 17:09:04.274226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.012 [2024-07-25 17:09:04.274236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.012 [2024-07-25 17:09:04.274244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.012 [2024-07-25 17:09:04.277753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.286842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.287558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.287595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.287606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.287843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.288063] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.288072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.288079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.291598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.300691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.301505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.301543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.301555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.301795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.302015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.302024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.302032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.305550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.314441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.315169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.315188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.315196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.315419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.315637] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.315644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.315651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.319152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.328239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.328802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.328818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.328826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.329041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.329263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.329271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.329278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.332777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.342071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.342751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.342767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.342775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.342990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.343214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.343222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.343229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.346726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.355817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.356582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.356620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.356630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.356871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.357091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.357100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.357108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.360624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.369721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.370500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.370538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.370549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.370785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.371005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.371014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.371021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.374528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.383622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.384438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.384475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.384487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.384724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.384944] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.384953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.384961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.388482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.397367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.398102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.398120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.398128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.398352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.398569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.398577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.398589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.402119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.411212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.411886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.411902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.411909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.412125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.412348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.412357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.412364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.415862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.425146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.425864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.425880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.425887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.426103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.426325] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.426333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.426340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.429839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.438924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.439642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.439658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.439665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.439881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.440097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.440105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.440112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.443628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.452721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.453524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.453561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.453572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.453808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.454028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.454037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.454044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.457550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.466699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.467518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.467556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.467566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.467802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.468022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.468031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.468039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.471547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.480630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.481452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.481489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.481499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.481736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.481955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.481964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.481972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.485489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.494580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.495314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.495334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.495342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.495563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.495780] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.495788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.495794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.499299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.508384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.509074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.509090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.509097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.509318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.509535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.509542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.509549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.513049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.275 [2024-07-25 17:09:04.522130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.522853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.522869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.275 [2024-07-25 17:09:04.522876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.275 [2024-07-25 17:09:04.523092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.275 [2024-07-25 17:09:04.523313] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.275 [2024-07-25 17:09:04.523321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.275 [2024-07-25 17:09:04.523328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.275 [2024-07-25 17:09:04.526824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.275 [2024-07-25 17:09:04.535909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.275 [2024-07-25 17:09:04.536613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.275 [2024-07-25 17:09:04.536629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.276 [2024-07-25 17:09:04.536636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.276 [2024-07-25 17:09:04.536852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.276 [2024-07-25 17:09:04.537068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.276 [2024-07-25 17:09:04.537076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.276 [2024-07-25 17:09:04.537086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.276 [2024-07-25 17:09:04.540590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.538 [2024-07-25 17:09:04.549687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.538 [2024-07-25 17:09:04.550360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.538 [2024-07-25 17:09:04.550376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.538 [2024-07-25 17:09:04.550383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.538 [2024-07-25 17:09:04.550599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.538 [2024-07-25 17:09:04.550814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.538 [2024-07-25 17:09:04.550822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.538 [2024-07-25 17:09:04.550829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.538 [2024-07-25 17:09:04.554344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.538 [2024-07-25 17:09:04.563428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.538 [2024-07-25 17:09:04.564142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.538 [2024-07-25 17:09:04.564156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.538 [2024-07-25 17:09:04.564164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.538 [2024-07-25 17:09:04.564385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.538 [2024-07-25 17:09:04.564601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.538 [2024-07-25 17:09:04.564608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.538 [2024-07-25 17:09:04.564615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.538 [2024-07-25 17:09:04.568111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.538 [2024-07-25 17:09:04.577195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.538 [2024-07-25 17:09:04.578009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.538 [2024-07-25 17:09:04.578047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.538 [2024-07-25 17:09:04.578058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.538 [2024-07-25 17:09:04.578304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.538 [2024-07-25 17:09:04.578524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.538 [2024-07-25 17:09:04.578534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.538 [2024-07-25 17:09:04.578541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.538 [2024-07-25 17:09:04.582045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.538 [2024-07-25 17:09:04.591134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.538 [2024-07-25 17:09:04.591836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.538 [2024-07-25 17:09:04.591860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.538 [2024-07-25 17:09:04.591869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.538 [2024-07-25 17:09:04.592087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.538 [2024-07-25 17:09:04.592311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.538 [2024-07-25 17:09:04.592319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.538 [2024-07-25 17:09:04.592326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.538 [2024-07-25 17:09:04.595828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.538 [2024-07-25 17:09:04.604915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.538 [2024-07-25 17:09:04.605688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.538 [2024-07-25 17:09:04.605727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.538 [2024-07-25 17:09:04.605737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.538 [2024-07-25 17:09:04.605974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.538 [2024-07-25 17:09:04.606195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.538 [2024-07-25 17:09:04.606211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.538 [2024-07-25 17:09:04.606219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.538 [2024-07-25 17:09:04.609730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.538 [2024-07-25 17:09:04.618825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.538 [2024-07-25 17:09:04.619625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.619663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.619674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.619910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.620129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.620139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.620147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.623665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.539 [2024-07-25 17:09:04.632760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.633523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.633561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.633571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.633809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.634033] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.634042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.634049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.637566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.539 [2024-07-25 17:09:04.646744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.647429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.647466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.647478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.647716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.647936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.647946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.647953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.651469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.539 [2024-07-25 17:09:04.660576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.661320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.661358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.661368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.661605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.661825] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.661834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.661842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.665351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.539 [2024-07-25 17:09:04.674431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.675169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.675187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.675195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.675418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.675635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.675643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.675649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.679151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.539 [2024-07-25 17:09:04.688239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.689004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.689041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.689052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.689297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.689518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.689527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.689534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.693038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.539 [2024-07-25 17:09:04.702127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.702824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.702862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.702872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.703109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.703337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.703347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.703354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.706860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.539 [2024-07-25 17:09:04.715947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.716681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.716700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.716708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.716925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.717141] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.717149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.717156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.720662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.539 [2024-07-25 17:09:04.729746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.730431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.730468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.730484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.730725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.730945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.730954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.730961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.734478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.539 [2024-07-25 17:09:04.743578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.744373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.744411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.744423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.744660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.744881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.744890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.744897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.748416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.539 [2024-07-25 17:09:04.757515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.758214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.758233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.758241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.758458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.758675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.758683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.758690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.762192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.539 [2024-07-25 17:09:04.771281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.771958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.771973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.771981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.772197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.772419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.772432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.772439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.775938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.539 [2024-07-25 17:09:04.785055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.785664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.785680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.785687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.785904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.786119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.786127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.786134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.789638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.539 [2024-07-25 17:09:04.798924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.539 [2024-07-25 17:09:04.799675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.539 [2024-07-25 17:09:04.799712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.539 [2024-07-25 17:09:04.799722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.539 [2024-07-25 17:09:04.799959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.539 [2024-07-25 17:09:04.800179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.539 [2024-07-25 17:09:04.800188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.539 [2024-07-25 17:09:04.800195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.539 [2024-07-25 17:09:04.803702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.801 [2024-07-25 17:09:04.812794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.801 [2024-07-25 17:09:04.813567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.801 [2024-07-25 17:09:04.813604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.801 [2024-07-25 17:09:04.813614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.801 [2024-07-25 17:09:04.813851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.801 [2024-07-25 17:09:04.814071] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.801 [2024-07-25 17:09:04.814080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.801 [2024-07-25 17:09:04.814088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.801 [2024-07-25 17:09:04.817600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.801 [2024-07-25 17:09:04.826725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.801 [2024-07-25 17:09:04.827532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.801 [2024-07-25 17:09:04.827569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.801 [2024-07-25 17:09:04.827579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.801 [2024-07-25 17:09:04.827815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.801 [2024-07-25 17:09:04.828035] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.801 [2024-07-25 17:09:04.828044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.801 [2024-07-25 17:09:04.828052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.801 [2024-07-25 17:09:04.831559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.801 [2024-07-25 17:09:04.840630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.801 [2024-07-25 17:09:04.841411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.801 [2024-07-25 17:09:04.841448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.801 [2024-07-25 17:09:04.841459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.801 [2024-07-25 17:09:04.841695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.801 [2024-07-25 17:09:04.841915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.801 [2024-07-25 17:09:04.841924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.801 [2024-07-25 17:09:04.841931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.801 [2024-07-25 17:09:04.845448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.801 [2024-07-25 17:09:04.854526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.801 [2024-07-25 17:09:04.855302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.801 [2024-07-25 17:09:04.855339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.801 [2024-07-25 17:09:04.855352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.801 [2024-07-25 17:09:04.855589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.801 [2024-07-25 17:09:04.855809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.801 [2024-07-25 17:09:04.855818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.801 [2024-07-25 17:09:04.855825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.801 [2024-07-25 17:09:04.859333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.801 [2024-07-25 17:09:04.868408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.801 [2024-07-25 17:09:04.869227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.801 [2024-07-25 17:09:04.869264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.801 [2024-07-25 17:09:04.869276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.801 [2024-07-25 17:09:04.869519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.801 [2024-07-25 17:09:04.869739] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.801 [2024-07-25 17:09:04.869749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.801 [2024-07-25 17:09:04.869756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.801 [2024-07-25 17:09:04.873266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.801 [2024-07-25 17:09:04.882342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.801 [2024-07-25 17:09:04.883101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.801 [2024-07-25 17:09:04.883138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.801 [2024-07-25 17:09:04.883149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.801 [2024-07-25 17:09:04.883393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.801 [2024-07-25 17:09:04.883614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.801 [2024-07-25 17:09:04.883623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.801 [2024-07-25 17:09:04.883631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.887129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.802 [2024-07-25 17:09:04.896197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.896990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.897026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.897037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.897284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.897505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.897513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.897521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.901020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.802 [2024-07-25 17:09:04.910090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.910871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.910908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.910919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.911155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.911384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.911393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.911405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.914906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.802 [2024-07-25 17:09:04.923979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.924723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.924761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.924771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.925008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.925237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.925246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.925253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.928755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.802 [2024-07-25 17:09:04.937828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.938501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.938520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.938528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.938745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.938961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.938969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.938976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.942475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.802 [2024-07-25 17:09:04.951761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.952250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.952272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.952280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.952499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.952715] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.952723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.952730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.956236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.802 [2024-07-25 17:09:04.965713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.966526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.966565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.966576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.966812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.967032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.967042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.967049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.970558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.802 [2024-07-25 17:09:04.979638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.980467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.980505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.980515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.980751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.980972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.980980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.980988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.984497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.802 [2024-07-25 17:09:04.993574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:04.994305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:04.994343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:04.994354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:04.994594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:04.994814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:04.994822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:04.994830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:04.998338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.802 [2024-07-25 17:09:05.007413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:05.008017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:05.008054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:05.008065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:05.008311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:05.008536] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:05.008545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:05.008552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:05.012054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.802 [2024-07-25 17:09:05.021333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:05.022150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:05.022187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:05.022199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:05.022447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:05.022667] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:05.022676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:05.022683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:05.026185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.802 [2024-07-25 17:09:05.035272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:05.036026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:05.036064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:05.036074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:05.036319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:05.036540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:05.036549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:05.036556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:05.040055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.802 [2024-07-25 17:09:05.049136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:05.049956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:05.049993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:05.050004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:05.050249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:05.050470] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:05.050478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:05.050486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:05.054004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.802 [2024-07-25 17:09:05.062876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.802 [2024-07-25 17:09:05.063592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.802 [2024-07-25 17:09:05.063630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:44.802 [2024-07-25 17:09:05.063641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:44.802 [2024-07-25 17:09:05.063877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:44.802 [2024-07-25 17:09:05.064098] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.802 [2024-07-25 17:09:05.064106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.802 [2024-07-25 17:09:05.064114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.802 [2024-07-25 17:09:05.067622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.076699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.077510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.077547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.077558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.077794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.078014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.078023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.078030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.081541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.067 [2024-07-25 17:09:05.090615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.091305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.091342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.091352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.091589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.091809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.091818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.091826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.095338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.104410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.104880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.104908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.104916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.105137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.105361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.105369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.105376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.108873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.067 [2024-07-25 17:09:05.118141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.118953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.118990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.119001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.119246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.119467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.119475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.119483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.122983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.132053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.132810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.132848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.132858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.133095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.133322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.133332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.133340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.136840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.067 [2024-07-25 17:09:05.145920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.146696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.146733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.146743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.146980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.147213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.147222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.147230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.150730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.159812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.160617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.160655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.160665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.160902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.161121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.161130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.161137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.164647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.067 [2024-07-25 17:09:05.173723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.174498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.174536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.174546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.174783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.175002] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.175011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.175018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.178535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.187608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.188426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.188464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.188474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.188711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.188931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.188939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.188947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.192455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.067 [2024-07-25 17:09:05.201533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.202300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.202337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.202347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.202583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.202804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.202812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.202820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.206330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.215401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.216213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.216251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.216262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.216502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.216722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.216730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.216737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.220244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.067 [2024-07-25 17:09:05.229341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.230143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.230180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.230191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.230436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.230657] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.230666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.230673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.234178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.243277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.244057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.244094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.244108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.244362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.244583] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.244592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.244600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.248100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.067 [2024-07-25 17:09:05.257185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.257957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.257994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.258004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.258248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.258469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.258478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.258486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.261986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.067 [2024-07-25 17:09:05.271060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.067 [2024-07-25 17:09:05.271885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.067 [2024-07-25 17:09:05.271922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.067 [2024-07-25 17:09:05.271933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.067 [2024-07-25 17:09:05.272169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.067 [2024-07-25 17:09:05.272398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.067 [2024-07-25 17:09:05.272408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.067 [2024-07-25 17:09:05.272416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.067 [2024-07-25 17:09:05.275916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.068 [2024-07-25 17:09:05.284998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.068 [2024-07-25 17:09:05.285779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.068 [2024-07-25 17:09:05.285816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.068 [2024-07-25 17:09:05.285827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.068 [2024-07-25 17:09:05.286063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.068 [2024-07-25 17:09:05.286290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.068 [2024-07-25 17:09:05.286304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.068 [2024-07-25 17:09:05.286311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.068 [2024-07-25 17:09:05.289813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.068 [2024-07-25 17:09:05.298887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.068 [2024-07-25 17:09:05.299623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.068 [2024-07-25 17:09:05.299642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.068 [2024-07-25 17:09:05.299650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.068 [2024-07-25 17:09:05.299867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.068 [2024-07-25 17:09:05.300083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.068 [2024-07-25 17:09:05.300091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.068 [2024-07-25 17:09:05.300098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.068 [2024-07-25 17:09:05.303598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.068 [2024-07-25 17:09:05.312673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.068 [2024-07-25 17:09:05.313483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.068 [2024-07-25 17:09:05.313521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.068 [2024-07-25 17:09:05.313531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.068 [2024-07-25 17:09:05.313768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.068 [2024-07-25 17:09:05.313988] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.068 [2024-07-25 17:09:05.313997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.068 [2024-07-25 17:09:05.314004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.068 [2024-07-25 17:09:05.317514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.068 [2024-07-25 17:09:05.326589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.068 [2024-07-25 17:09:05.327316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.068 [2024-07-25 17:09:05.327354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.068 [2024-07-25 17:09:05.327367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.068 [2024-07-25 17:09:05.327603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.068 [2024-07-25 17:09:05.327823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.068 [2024-07-25 17:09:05.327832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.068 [2024-07-25 17:09:05.327840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.068 [2024-07-25 17:09:05.331349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.340427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.341121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.341140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.341147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.341369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.341586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.341594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.341601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.345106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.354184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.354937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.354974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.354985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.355230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.355451] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.355459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.355467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.358969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.368048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.368828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.368866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.368876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.369113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.369343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.369353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.369360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.372863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.381943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.382743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.382780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.382791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.383032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.383263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.383273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.383281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.386785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.395859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.396643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.396681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.396691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.396927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.397148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.397156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.397164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.400676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.409753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.410555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.410592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.410603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.410839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.411059] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.411068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.411075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.414583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.423659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.424477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.424515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.424525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.424762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.424982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.424991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.425003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.428510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.437587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.438443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.438481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.438492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.438729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.438949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.438958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.438965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.442478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.451360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.452136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.452173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.452185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.452432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.452653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.452662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.452670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.456179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.465259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.466021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.466059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.466069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.466314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.466535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.466544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.466552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.470052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.479129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.479956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.479993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.480003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.480248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.480469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.480478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.480485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.483985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.493065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.493758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.493796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.493806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.494043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.494271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.494280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.494288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.497789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.506868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.507571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.507608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.507619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.507855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.508075] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.508083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.508091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.511601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.520680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.521498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.521535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.521545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.521781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.522009] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.522018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.522025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.525536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.534614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.535414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.535451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.535462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.535699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.535918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.535927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.535935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.539446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.548531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.549300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.549337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.549349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.549587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.549807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.549816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.549824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.553335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.562418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.563219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.563256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.563266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.563503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.563723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.563732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.563740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.567258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.331 [2024-07-25 17:09:05.576334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.577111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.577148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.577160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.577409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.577630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.577639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.577647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.581149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.331 [2024-07-25 17:09:05.590227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.331 [2024-07-25 17:09:05.591048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.331 [2024-07-25 17:09:05.591085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.331 [2024-07-25 17:09:05.591095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.331 [2024-07-25 17:09:05.591344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.331 [2024-07-25 17:09:05.591565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.331 [2024-07-25 17:09:05.591573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.331 [2024-07-25 17:09:05.591581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.331 [2024-07-25 17:09:05.595082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.593 [2024-07-25 17:09:05.604163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.593 [2024-07-25 17:09:05.604966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.593 [2024-07-25 17:09:05.605004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.593 [2024-07-25 17:09:05.605014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.593 [2024-07-25 17:09:05.605260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.593 [2024-07-25 17:09:05.605480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.605489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.605497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.608998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.594 [2024-07-25 17:09:05.618075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.618895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.618933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.618948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.619184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.619413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.619423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.619430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.622932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.594 [2024-07-25 17:09:05.632006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.632759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.632796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.632806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.633042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.633270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.633279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.633287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.636788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.594 [2024-07-25 17:09:05.645873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.646642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.646680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.646691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.646927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.647147] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.647156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.647164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.650678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.594 [2024-07-25 17:09:05.659760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.660578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.660615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.660626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.660862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.661082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.661096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.661103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.664616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.594 [2024-07-25 17:09:05.673692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.674483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.674521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.674531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.674768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.674988] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.674997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.675004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.678515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.594 [2024-07-25 17:09:05.687649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.688337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.688374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.688384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.688621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.688841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.688850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.688857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.692367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.594 [2024-07-25 17:09:05.701446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.702226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.702263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.702274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.702510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.702730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.702739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.702747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.706256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.594 [2024-07-25 17:09:05.715335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.716086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.716123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.716134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.716378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.716599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.716608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.716615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.720114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.594 [2024-07-25 17:09:05.729191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.729840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.729877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.729888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.594 [2024-07-25 17:09:05.730124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.594 [2024-07-25 17:09:05.730354] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.594 [2024-07-25 17:09:05.730363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.594 [2024-07-25 17:09:05.730371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.594 [2024-07-25 17:09:05.733872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.594 [2024-07-25 17:09:05.742949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.594 [2024-07-25 17:09:05.743724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.594 [2024-07-25 17:09:05.743762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.594 [2024-07-25 17:09:05.743773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.744009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.744247] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.744257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.744265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.747766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.595 [2024-07-25 17:09:05.756848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.757636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.757673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.757688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.757925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.758145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.758154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.758162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.761671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.595 [2024-07-25 17:09:05.770746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.771542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.771579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.771590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.771826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.772046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.772055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.772062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.775574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.595 [2024-07-25 17:09:05.784652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.785468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.785506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.785516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.785752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.785972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.785981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.785989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.789501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.595 [2024-07-25 17:09:05.798582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.799458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.799496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.799507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.799743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.799963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.799977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.799984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.803508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.595 [2024-07-25 17:09:05.812460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.813007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.813026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.813033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.813255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.813472] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.813481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.813488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.816983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.595 [2024-07-25 17:09:05.826263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.826939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.826954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.826961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.827177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.827398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.827406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.827413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.830905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.595 [2024-07-25 17:09:05.840182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.840902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.840918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.840925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.841141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.841360] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.841368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.841375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.844880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.595 [2024-07-25 17:09:05.853953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.595 [2024-07-25 17:09:05.854736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.595 [2024-07-25 17:09:05.854774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.595 [2024-07-25 17:09:05.854784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.595 [2024-07-25 17:09:05.855021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.595 [2024-07-25 17:09:05.855249] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.595 [2024-07-25 17:09:05.855267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.595 [2024-07-25 17:09:05.855274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.595 [2024-07-25 17:09:05.858782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.858 [2024-07-25 17:09:05.867860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.868641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.868678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.868689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.868925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.858 [2024-07-25 17:09:05.869145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.858 [2024-07-25 17:09:05.869154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.858 [2024-07-25 17:09:05.869162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.858 [2024-07-25 17:09:05.872668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.858 [2024-07-25 17:09:05.881748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.882331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.882368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.882380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.882618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.858 [2024-07-25 17:09:05.882837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.858 [2024-07-25 17:09:05.882846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.858 [2024-07-25 17:09:05.882855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.858 [2024-07-25 17:09:05.886365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.858 [2024-07-25 17:09:05.895648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.896423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.896460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.896471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.896712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.858 [2024-07-25 17:09:05.896932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.858 [2024-07-25 17:09:05.896941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.858 [2024-07-25 17:09:05.896949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.858 [2024-07-25 17:09:05.900460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.858 [2024-07-25 17:09:05.909545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.910316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.910354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.910366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.910604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.858 [2024-07-25 17:09:05.910824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.858 [2024-07-25 17:09:05.910833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.858 [2024-07-25 17:09:05.910840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.858 [2024-07-25 17:09:05.914364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.858 [2024-07-25 17:09:05.923455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.924283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.924320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.924331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.924567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.858 [2024-07-25 17:09:05.924787] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.858 [2024-07-25 17:09:05.924796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.858 [2024-07-25 17:09:05.924803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.858 [2024-07-25 17:09:05.928314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.858 [2024-07-25 17:09:05.937395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.938169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.938214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.938227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.938464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.858 [2024-07-25 17:09:05.938684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.858 [2024-07-25 17:09:05.938692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.858 [2024-07-25 17:09:05.938704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.858 [2024-07-25 17:09:05.942211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.858 [2024-07-25 17:09:05.951302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.951991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.952010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.952017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.952240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.858 [2024-07-25 17:09:05.952457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.858 [2024-07-25 17:09:05.952466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.858 [2024-07-25 17:09:05.952473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.858 [2024-07-25 17:09:05.955978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.858 [2024-07-25 17:09:05.965195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.858 [2024-07-25 17:09:05.966022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.858 [2024-07-25 17:09:05.966059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.858 [2024-07-25 17:09:05.966069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.858 [2024-07-25 17:09:05.966314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:05.966535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:05.966544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:05.966551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:05.970054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.859 [2024-07-25 17:09:05.979139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:05.979963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:05.980000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:05.980011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:05.980255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:05.980476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:05.980485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:05.980493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:05.983994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.859 [2024-07-25 17:09:05.993072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:05.993811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:05.993835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:05.993843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:05.994060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:05.994284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:05.994293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:05.994299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:05.997796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.859 [2024-07-25 17:09:06.006869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.007510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.007547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:06.007558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:06.007795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:06.008015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:06.008024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:06.008032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:06.011547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.859 [2024-07-25 17:09:06.020624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.021136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.021155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:06.021163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:06.021386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:06.021602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:06.021611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:06.021618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:06.025115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.859 [2024-07-25 17:09:06.034398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.035068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.035084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:06.035091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:06.035312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:06.035534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:06.035542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:06.035549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:06.039044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.859 [2024-07-25 17:09:06.048336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.049136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.049173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:06.049185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:06.049431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:06.049652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:06.049661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:06.049668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:06.053169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.859 [2024-07-25 17:09:06.062261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.063050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.063087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:06.063097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:06.063342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:06.063562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:06.063571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:06.063579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:06.067082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.859 [2024-07-25 17:09:06.076162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.076941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.076979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:06.076990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:06.077234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:06.077454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:06.077464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:06.077471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:06.080979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.859 [2024-07-25 17:09:06.090062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.090827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.090865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.859 [2024-07-25 17:09:06.090875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.859 [2024-07-25 17:09:06.091112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.859 [2024-07-25 17:09:06.091345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.859 [2024-07-25 17:09:06.091356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.859 [2024-07-25 17:09:06.091363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.859 [2024-07-25 17:09:06.094865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.859 [2024-07-25 17:09:06.103946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.859 [2024-07-25 17:09:06.104567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.859 [2024-07-25 17:09:06.104586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.860 [2024-07-25 17:09:06.104594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.860 [2024-07-25 17:09:06.104811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.860 [2024-07-25 17:09:06.105027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.860 [2024-07-25 17:09:06.105035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.860 [2024-07-25 17:09:06.105042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.860 [2024-07-25 17:09:06.108543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.860 [2024-07-25 17:09:06.117825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.860 [2024-07-25 17:09:06.118598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.860 [2024-07-25 17:09:06.118635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:45.860 [2024-07-25 17:09:06.118646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:45.860 [2024-07-25 17:09:06.118882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:45.860 [2024-07-25 17:09:06.119102] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.860 [2024-07-25 17:09:06.119111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.860 [2024-07-25 17:09:06.119119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.860 [2024-07-25 17:09:06.122629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.122 [2024-07-25 17:09:06.131737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.122 [2024-07-25 17:09:06.132532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.122 [2024-07-25 17:09:06.132570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.122 [2024-07-25 17:09:06.132585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.122 [2024-07-25 17:09:06.132822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.122 [2024-07-25 17:09:06.133042] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.122 [2024-07-25 17:09:06.133051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.122 [2024-07-25 17:09:06.133059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.122 [2024-07-25 17:09:06.136571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.122 [2024-07-25 17:09:06.145665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.122 [2024-07-25 17:09:06.146471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.122 [2024-07-25 17:09:06.146509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.122 [2024-07-25 17:09:06.146519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.122 [2024-07-25 17:09:06.146756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.122 [2024-07-25 17:09:06.146976] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.122 [2024-07-25 17:09:06.146985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.122 [2024-07-25 17:09:06.146992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.122 [2024-07-25 17:09:06.150504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.122 [2024-07-25 17:09:06.159591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.122 [2024-07-25 17:09:06.160418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.122 [2024-07-25 17:09:06.160456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.122 [2024-07-25 17:09:06.160468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.122 [2024-07-25 17:09:06.160708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.122 [2024-07-25 17:09:06.160928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.122 [2024-07-25 17:09:06.160937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.122 [2024-07-25 17:09:06.160944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.122 [2024-07-25 17:09:06.164454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.122 [2024-07-25 17:09:06.173331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.122 [2024-07-25 17:09:06.174144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.122 [2024-07-25 17:09:06.174183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.122 [2024-07-25 17:09:06.174195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.122 [2024-07-25 17:09:06.174440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.122 [2024-07-25 17:09:06.174661] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.122 [2024-07-25 17:09:06.174674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.122 [2024-07-25 17:09:06.174682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.122 [2024-07-25 17:09:06.178184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.122 [2024-07-25 17:09:06.187267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.122 [2024-07-25 17:09:06.187922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.187941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.187948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.188165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.188388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.188396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.188404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.191899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.123 [2024-07-25 17:09:06.201178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.201986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.202023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.202034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.202277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.202497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.202506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.202514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.206015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.123 [2024-07-25 17:09:06.215096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.215925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.215963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.215973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.216221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.216441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.216450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.216458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.219959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.123 [2024-07-25 17:09:06.228845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.229521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.229541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.229549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.229767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.229983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.229991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.229997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.233497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.123 [2024-07-25 17:09:06.242772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.243442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.243458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.243466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.243682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.243897] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.243905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.243911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.247420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.123 [2024-07-25 17:09:06.256701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.257420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.257435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.257442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.257665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.257881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.257889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.257896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.261395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.123 [2024-07-25 17:09:06.270467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.271182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.271197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.271210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.271430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.271646] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.271653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.271660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.275154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.123 [2024-07-25 17:09:06.284231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.284940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.284955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.284962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.285178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.285398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.285406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.285413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.288909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.123 [2024-07-25 17:09:06.297984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.298721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.298758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.298769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.299006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.299233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.299243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.299251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.302754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.123 [2024-07-25 17:09:06.311834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.312628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.312665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.312675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.312912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.313133] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.313142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.313154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.316664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.123 [2024-07-25 17:09:06.325749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.326542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.326580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.326592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.326830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.327051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.327059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.327067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.330577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.123 [2024-07-25 17:09:06.339658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.340446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.340484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.340494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.340731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.340951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.340959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.340967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.344482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.123 [2024-07-25 17:09:06.353561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.354385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.354422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.354433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.354670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.354890] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.354899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.354906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.358426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.123 [2024-07-25 17:09:06.367303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.367998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.368016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.368023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.368245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.368462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.368470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.368477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.371969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.123 [2024-07-25 17:09:06.381044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.123 [2024-07-25 17:09:06.381775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.123 [2024-07-25 17:09:06.381791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.123 [2024-07-25 17:09:06.381798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.123 [2024-07-25 17:09:06.382014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.123 [2024-07-25 17:09:06.382234] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.123 [2024-07-25 17:09:06.382242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.123 [2024-07-25 17:09:06.382249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.123 [2024-07-25 17:09:06.385746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.386 [2024-07-25 17:09:06.394817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.386 [2024-07-25 17:09:06.395505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.386 [2024-07-25 17:09:06.395543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.386 [2024-07-25 17:09:06.395555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.386 [2024-07-25 17:09:06.395793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.386 [2024-07-25 17:09:06.396013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.386 [2024-07-25 17:09:06.396022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.386 [2024-07-25 17:09:06.396030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.386 [2024-07-25 17:09:06.399541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.386 [2024-07-25 17:09:06.408623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.386 [2024-07-25 17:09:06.409506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.409543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.409554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.409796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.410016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.410025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.410033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.413542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.387 [2024-07-25 17:09:06.422424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.423117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.423135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.423143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.423366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.423583] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.423592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.423599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.427100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.387 [2024-07-25 17:09:06.436174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.436999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.437036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.437048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.437295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.437516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.437525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.437533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.441035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.387 [2024-07-25 17:09:06.449914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.450581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.450600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.450608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.450825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.451041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.451049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.451060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.454564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.387 [2024-07-25 17:09:06.463851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.464738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.464775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.464786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.465023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.465251] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.465260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.465268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.468769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.387 [2024-07-25 17:09:06.477637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.478379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.478398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.478406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.478623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.478839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.478846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.478853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.482355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.387 [2024-07-25 17:09:06.491425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.492118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.492133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.492140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.492361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.492577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.492585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.492592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.496086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.387 [2024-07-25 17:09:06.505163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.505964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.506005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.506016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.506262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.506483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.506491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.506498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.510004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.387 [2024-07-25 17:09:06.518927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.519656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.519693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.519704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.519941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.387 [2024-07-25 17:09:06.520161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.387 [2024-07-25 17:09:06.520170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.387 [2024-07-25 17:09:06.520177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.387 [2024-07-25 17:09:06.523686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.387 [2024-07-25 17:09:06.532766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.387 [2024-07-25 17:09:06.533643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.387 [2024-07-25 17:09:06.533680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.387 [2024-07-25 17:09:06.533691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.387 [2024-07-25 17:09:06.533928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.534148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.534156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.534164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 [2024-07-25 17:09:06.537673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.388 [2024-07-25 17:09:06.546569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.547404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.547442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.547452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.547688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.547913] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.547922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.547929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 [2024-07-25 17:09:06.551446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.388 [2024-07-25 17:09:06.560334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.561021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.561039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.561047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.561270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.561487] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.561494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.561501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 [2024-07-25 17:09:06.564999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.388 [2024-07-25 17:09:06.574074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.574756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.574773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.574780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.574997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.575217] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.575225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.575232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 [2024-07-25 17:09:06.578727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.388 [2024-07-25 17:09:06.588009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.588733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.588771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.588781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.589018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.589246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.589255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.589262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 [2024-07-25 17:09:06.592773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1610055 Killed "${NVMF_APP[@]}" "$@" 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.388 [2024-07-25 17:09:06.601853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.602643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.602681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.602692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.602928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.603148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.603157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.603165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1611872 00:29:46.388 [2024-07-25 17:09:06.606672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1611872 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1611872 ']' 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
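The trace above shows the test harness restarting the NVMe-oF target: bdevperf.sh reports the previous target process ("${NVMF_APP[@]}", pid 1610055) as Killed, which is consistent with the refused connections on port 4420, and tgt_init / nvmfappstart -m 0xE then relaunches build/bin/nvmf_tgt while waitforlisten blocks until the new process (nvmfpid=1611872) is listening on /var/tmp/spdk.sock. A minimal sketch of that restart-and-wait pattern, using only the binary path, flags, and RPC socket visible in the trace (the real harness wraps this in ip netns exec cvl_0_0_ns_spdk and its own helper functions; this sketch is illustrative, not the harness code):

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock
    "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &           # relaunch the target in the background
    tgt_pid=$!
    until [ -S "$RPC_SOCK" ]; do sleep 0.1; done  # wait for the RPC UNIX socket to appear
    echo "nvmf_tgt ($tgt_pid) is listening on $RPC_SOCK"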
00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:46.388 17:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.388 [2024-07-25 17:09:06.615750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.616542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.616579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.616590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.616826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.617047] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.617055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.617063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 [2024-07-25 17:09:06.620577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.388 [2024-07-25 17:09:06.629662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.630490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.630528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.630539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.630776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.630995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.388 [2024-07-25 17:09:06.631004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.388 [2024-07-25 17:09:06.631012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.388 [2024-07-25 17:09:06.634527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.388 [2024-07-25 17:09:06.643607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.388 [2024-07-25 17:09:06.644423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.388 [2024-07-25 17:09:06.644460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.388 [2024-07-25 17:09:06.644472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.388 [2024-07-25 17:09:06.644713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.388 [2024-07-25 17:09:06.644933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.389 [2024-07-25 17:09:06.644942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.389 [2024-07-25 17:09:06.644949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.389 [2024-07-25 17:09:06.648468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.389 [2024-07-25 17:09:06.657549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.651 [2024-07-25 17:09:06.658399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.651 [2024-07-25 17:09:06.658437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.651 [2024-07-25 17:09:06.658448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.651 [2024-07-25 17:09:06.658685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.651 [2024-07-25 17:09:06.658914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.651 [2024-07-25 17:09:06.658924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.651 [2024-07-25 17:09:06.658931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.651 [2024-07-25 17:09:06.659871] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:29:46.651 [2024-07-25 17:09:06.659923] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.651 [2024-07-25 17:09:06.662441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
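The "Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization..." line and the bracketed DPDK EAL parameters above are printed by the freshly started nvmf_tgt while it parses its arguments; the core masks -c 0xE / -m 0xE select CPU cores 1-3. A small illustrative check of that mask decoding (plain python3 from a shell, not harness code):

    python3 -c 'm = 0xE; print([i for i in range(32) if (m >> i) & 1])'
    # prints: [1, 2, 3]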
00:29:46.651 [2024-07-25 17:09:06.671317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.651 [2024-07-25 17:09:06.672052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.651 [2024-07-25 17:09:06.672071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.651 [2024-07-25 17:09:06.672079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.651 [2024-07-25 17:09:06.672302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.651 [2024-07-25 17:09:06.672519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.651 [2024-07-25 17:09:06.672527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.651 [2024-07-25 17:09:06.672535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.651 [2024-07-25 17:09:06.676032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.651 [2024-07-25 17:09:06.685104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.651 [2024-07-25 17:09:06.685855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.651 [2024-07-25 17:09:06.685892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.651 [2024-07-25 17:09:06.685904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.651 [2024-07-25 17:09:06.686140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.651 [2024-07-25 17:09:06.686369] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.651 [2024-07-25 17:09:06.686379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.651 [2024-07-25 17:09:06.686387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.651 [2024-07-25 17:09:06.689887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.651 EAL: No free 2048 kB hugepages reported on node 1
00:29:46.651 [2024-07-25 17:09:06.698968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.651 [2024-07-25 17:09:06.699672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.652 [2024-07-25 17:09:06.699691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.652 [2024-07-25 17:09:06.699699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.652 [2024-07-25 17:09:06.699917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.652 [2024-07-25 17:09:06.700133] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.652 [2024-07-25 17:09:06.700141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.652 [2024-07-25 17:09:06.700148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.652 [2024-07-25 17:09:06.703649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.652 [2024-07-25 17:09:06.712720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.652 [2024-07-25 17:09:06.713533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.652 [2024-07-25 17:09:06.713571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.652 [2024-07-25 17:09:06.713586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.652 [2024-07-25 17:09:06.713823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.652 [2024-07-25 17:09:06.714043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.652 [2024-07-25 17:09:06.714052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.652 [2024-07-25 17:09:06.714060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.652 [2024-07-25 17:09:06.717568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.652 [2024-07-25 17:09:06.726643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.652 [2024-07-25 17:09:06.727498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.652 [2024-07-25 17:09:06.727535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.652 [2024-07-25 17:09:06.727546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.652 [2024-07-25 17:09:06.727782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.652 [2024-07-25 17:09:06.728003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.652 [2024-07-25 17:09:06.728011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.652 [2024-07-25 17:09:06.728019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.652 [2024-07-25 17:09:06.731616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.652 [2024-07-25 17:09:06.740495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.652 [2024-07-25 17:09:06.741047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.652 [2024-07-25 17:09:06.741084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.652 [2024-07-25 17:09:06.741096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.652 [2024-07-25 17:09:06.741342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.652 [2024-07-25 17:09:06.741562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.652 [2024-07-25 17:09:06.741571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.652 [2024-07-25 17:09:06.741579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.652 [2024-07-25 17:09:06.743292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:46.652 [2024-07-25 17:09:06.745079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.652 [2024-07-25 17:09:06.754381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.652 [2024-07-25 17:09:06.755008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.652 [2024-07-25 17:09:06.755047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.652 [2024-07-25 17:09:06.755058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.652 [2024-07-25 17:09:06.755302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.652 [2024-07-25 17:09:06.755528] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.652 [2024-07-25 17:09:06.755537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.652 [2024-07-25 17:09:06.755545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.652 [2024-07-25 17:09:06.759044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.652 [2024-07-25 17:09:06.768132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.652 [2024-07-25 17:09:06.768883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.652 [2024-07-25 17:09:06.768902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.652 [2024-07-25 17:09:06.768910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.652 [2024-07-25 17:09:06.769128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.652 [2024-07-25 17:09:06.769350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.652 [2024-07-25 17:09:06.769360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.652 [2024-07-25 17:09:06.769367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.652 [2024-07-25 17:09:06.772864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.652 [2024-07-25 17:09:06.781945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.652 [2024-07-25 17:09:06.782557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.652 [2024-07-25 17:09:06.782595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.652 [2024-07-25 17:09:06.782606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.652 [2024-07-25 17:09:06.782844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.652 [2024-07-25 17:09:06.783064] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.652 [2024-07-25 17:09:06.783072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.652 [2024-07-25 17:09:06.783080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.652 [2024-07-25 17:09:06.786590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.652 [2024-07-25 17:09:06.795878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.652 [2024-07-25 17:09:06.796579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.652 [2024-07-25 17:09:06.796598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.652 [2024-07-25 17:09:06.796607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.652 [2024-07-25 17:09:06.796824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.652 [2024-07-25 17:09:06.797040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.652 [2024-07-25 17:09:06.797048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.652 [2024-07-25 17:09:06.797056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.652 [2024-07-25 17:09:06.797196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:46.652 [2024-07-25 17:09:06.797223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:46.652 [2024-07-25 17:09:06.797230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:46.652 [2024-07-25 17:09:06.797235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:46.652 [2024-07-25 17:09:06.797240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
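(Editor's note: the app_setup_trace notices above are the target's own instructions for capturing its tracepoints. A minimal sketch of following them from a shell on the test node, assuming the spdk_trace tool from this SPDK build is on PATH and using the trace name/instance id exactly as printed; the output filenames are illustrative only.)
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # snapshot of runtime events, per the notice above
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0    # or keep the raw trace file for offline analysis/debug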
00:29:46.652 [2024-07-25 17:09:06.797415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:46.652 [2024-07-25 17:09:06.797634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:46.652 [2024-07-25 17:09:06.797634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:46.652 [2024-07-25 17:09:06.800565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.652 [2024-07-25 17:09:06.809646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.652 [2024-07-25 17:09:06.810471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.652 [2024-07-25 17:09:06.810511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.652 [2024-07-25 17:09:06.810522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.652 [2024-07-25 17:09:06.810760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.652 [2024-07-25 17:09:06.810981] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.652 [2024-07-25 17:09:06.810989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.652 [2024-07-25 17:09:06.810998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.653 [2024-07-25 17:09:06.814506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.653 [2024-07-25 17:09:06.823585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.653 [2024-07-25 17:09:06.824446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.653 [2024-07-25 17:09:06.824485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420
00:29:46.653 [2024-07-25 17:09:06.824496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set
00:29:46.653 [2024-07-25 17:09:06.824735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor
00:29:46.653 [2024-07-25 17:09:06.824955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.653 [2024-07-25 17:09:06.824964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.653 [2024-07-25 17:09:06.824972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.653 [2024-07-25 17:09:06.828480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.653 [2024-07-25 17:09:06.837449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.653 [2024-07-25 17:09:06.838297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-07-25 17:09:06.838335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.653 [2024-07-25 17:09:06.838347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.653 [2024-07-25 17:09:06.838588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.653 [2024-07-25 17:09:06.838815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.653 [2024-07-25 17:09:06.838824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.653 [2024-07-25 17:09:06.838831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.653 [2024-07-25 17:09:06.842342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.653 [2024-07-25 17:09:06.851229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.653 [2024-07-25 17:09:06.855517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-07-25 17:09:06.855555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.653 [2024-07-25 17:09:06.855565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.653 [2024-07-25 17:09:06.855802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.653 [2024-07-25 17:09:06.856023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.653 [2024-07-25 17:09:06.856031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.653 [2024-07-25 17:09:06.856039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.653 [2024-07-25 17:09:06.859560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.653 [2024-07-25 17:09:06.865127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.653 [2024-07-25 17:09:06.865968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-07-25 17:09:06.866005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.653 [2024-07-25 17:09:06.866016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.653 [2024-07-25 17:09:06.866260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.653 [2024-07-25 17:09:06.866481] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.653 [2024-07-25 17:09:06.866490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.653 [2024-07-25 17:09:06.866498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.653 [2024-07-25 17:09:06.869998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.653 [2024-07-25 17:09:06.878866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.653 [2024-07-25 17:09:06.879653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-07-25 17:09:06.879690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.653 [2024-07-25 17:09:06.879701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.653 [2024-07-25 17:09:06.879938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.653 [2024-07-25 17:09:06.880158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.653 [2024-07-25 17:09:06.880167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.653 [2024-07-25 17:09:06.880175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.653 [2024-07-25 17:09:06.883687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.653 [2024-07-25 17:09:06.892762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.653 [2024-07-25 17:09:06.893522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-07-25 17:09:06.893542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.653 [2024-07-25 17:09:06.893550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.653 [2024-07-25 17:09:06.893767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.653 [2024-07-25 17:09:06.893983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.653 [2024-07-25 17:09:06.893991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.653 [2024-07-25 17:09:06.893999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.653 [2024-07-25 17:09:06.897541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.653 [2024-07-25 17:09:06.906612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.653 [2024-07-25 17:09:06.907458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-07-25 17:09:06.907495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.653 [2024-07-25 17:09:06.907506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.653 [2024-07-25 17:09:06.907743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.653 [2024-07-25 17:09:06.907963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.653 [2024-07-25 17:09:06.907972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.653 [2024-07-25 17:09:06.907979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.653 [2024-07-25 17:09:06.911489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.653 [2024-07-25 17:09:06.920357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.653 [2024-07-25 17:09:06.921012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-07-25 17:09:06.921031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.653 [2024-07-25 17:09:06.921038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.653 [2024-07-25 17:09:06.921260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.653 [2024-07-25 17:09:06.921477] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.653 [2024-07-25 17:09:06.921486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.653 [2024-07-25 17:09:06.921493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.915 [2024-07-25 17:09:06.925020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.915 [2024-07-25 17:09:06.934092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.915 [2024-07-25 17:09:06.934834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.915 [2024-07-25 17:09:06.934871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.915 [2024-07-25 17:09:06.934891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.915 [2024-07-25 17:09:06.935128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.915 [2024-07-25 17:09:06.935355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.915 [2024-07-25 17:09:06.935364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.915 [2024-07-25 17:09:06.935371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.915 [2024-07-25 17:09:06.938871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.915 [2024-07-25 17:09:06.947958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.915 [2024-07-25 17:09:06.948720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.915 [2024-07-25 17:09:06.948758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.915 [2024-07-25 17:09:06.948768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:06.949005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:06.949232] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:06.949242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:06.949249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:06.952747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.916 [2024-07-25 17:09:06.961828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:06.962643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:06.962680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:06.962691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:06.962927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:06.963147] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:06.963156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:06.963163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:06.966892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.916 [2024-07-25 17:09:06.975772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:06.976576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:06.976614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:06.976624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:06.976860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:06.977081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:06.977095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:06.977102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:06.980610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.916 [2024-07-25 17:09:06.989684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:06.990518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:06.990556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:06.990566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:06.990802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:06.991022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:06.991031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:06.991039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:06.994543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.916 [2024-07-25 17:09:07.003616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:07.004359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:07.004378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:07.004386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:07.004603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:07.004819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:07.004827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:07.004834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:07.008331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.916 [2024-07-25 17:09:07.017401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:07.018129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:07.018145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:07.018152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:07.018373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:07.018589] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:07.018597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:07.018604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:07.022097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.916 [2024-07-25 17:09:07.031197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:07.031846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:07.031861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:07.031869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:07.032084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:07.032305] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:07.032313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:07.032320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:07.035811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.916 [2024-07-25 17:09:07.045080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:07.045865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:07.045902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:07.045912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:07.046149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:07.046376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:07.046385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:07.046393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:07.049900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.916 [2024-07-25 17:09:07.058972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:07.059439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:07.059476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:07.059488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:07.059728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:07.059948] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:07.059956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:07.059964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:07.063480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.916 [2024-07-25 17:09:07.072760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:07.073558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:07.073596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.916 [2024-07-25 17:09:07.073606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.916 [2024-07-25 17:09:07.073849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.916 [2024-07-25 17:09:07.074069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.916 [2024-07-25 17:09:07.074078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.916 [2024-07-25 17:09:07.074085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.916 [2024-07-25 17:09:07.077593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.916 [2024-07-25 17:09:07.086667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.916 [2024-07-25 17:09:07.087302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.916 [2024-07-25 17:09:07.087339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.087351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.087591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.087811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.087820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.087828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.917 [2024-07-25 17:09:07.091334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.917 [2024-07-25 17:09:07.100408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.917 [2024-07-25 17:09:07.101224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.917 [2024-07-25 17:09:07.101261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.101273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.101513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.101733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.101742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.101749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.917 [2024-07-25 17:09:07.105257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.917 [2024-07-25 17:09:07.114341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.917 [2024-07-25 17:09:07.115149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.917 [2024-07-25 17:09:07.115186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.115197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.115441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.115661] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.115670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.115682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.917 [2024-07-25 17:09:07.119180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.917 [2024-07-25 17:09:07.128255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.917 [2024-07-25 17:09:07.128835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.917 [2024-07-25 17:09:07.128872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.128883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.129119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.129346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.129356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.129364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.917 [2024-07-25 17:09:07.132863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.917 [2024-07-25 17:09:07.142142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.917 [2024-07-25 17:09:07.142603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.917 [2024-07-25 17:09:07.142640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.142652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.142892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.143112] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.143121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.143129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.917 [2024-07-25 17:09:07.146649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.917 [2024-07-25 17:09:07.155935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.917 [2024-07-25 17:09:07.156744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.917 [2024-07-25 17:09:07.156782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.156792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.157029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.157254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.157263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.157271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.917 [2024-07-25 17:09:07.160783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.917 [2024-07-25 17:09:07.169857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.917 [2024-07-25 17:09:07.170654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.917 [2024-07-25 17:09:07.170691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.170702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.170938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.171158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.171167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.171174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.917 [2024-07-25 17:09:07.174681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.917 [2024-07-25 17:09:07.183754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.917 [2024-07-25 17:09:07.184562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.917 [2024-07-25 17:09:07.184600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:46.917 [2024-07-25 17:09:07.184611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:46.917 [2024-07-25 17:09:07.184847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:46.917 [2024-07-25 17:09:07.185067] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.917 [2024-07-25 17:09:07.185075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.917 [2024-07-25 17:09:07.185083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.188590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.179 [2024-07-25 17:09:07.197663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.198152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.198170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.198178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.198400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.179 [2024-07-25 17:09:07.198616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.179 [2024-07-25 17:09:07.198624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.179 [2024-07-25 17:09:07.198631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.202123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.179 [2024-07-25 17:09:07.211400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.212094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.212110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.212117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.212342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.179 [2024-07-25 17:09:07.212559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.179 [2024-07-25 17:09:07.212567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.179 [2024-07-25 17:09:07.212574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.216066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.179 [2024-07-25 17:09:07.225134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.225905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.225943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.225954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.226194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.179 [2024-07-25 17:09:07.226422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.179 [2024-07-25 17:09:07.226432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.179 [2024-07-25 17:09:07.226439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.229939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.179 [2024-07-25 17:09:07.239014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.239803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.239841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.239854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.240094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.179 [2024-07-25 17:09:07.240323] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.179 [2024-07-25 17:09:07.240332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.179 [2024-07-25 17:09:07.240340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.243840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.179 [2024-07-25 17:09:07.252926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.253546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.253584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.253594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.253831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.179 [2024-07-25 17:09:07.254051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.179 [2024-07-25 17:09:07.254060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.179 [2024-07-25 17:09:07.254072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.257588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.179 [2024-07-25 17:09:07.266675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.267520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.267558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.267568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.267806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.179 [2024-07-25 17:09:07.268026] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.179 [2024-07-25 17:09:07.268035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.179 [2024-07-25 17:09:07.268042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.271552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.179 [2024-07-25 17:09:07.280423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.281262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.281299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.281311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.281552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.179 [2024-07-25 17:09:07.281772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.179 [2024-07-25 17:09:07.281781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.179 [2024-07-25 17:09:07.281789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.179 [2024-07-25 17:09:07.285301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.179 [2024-07-25 17:09:07.294169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.179 [2024-07-25 17:09:07.294507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.179 [2024-07-25 17:09:07.294526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.179 [2024-07-25 17:09:07.294535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.179 [2024-07-25 17:09:07.294751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.294968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.294976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.294983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.298487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.180 [2024-07-25 17:09:07.307970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.308749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.308791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.308802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.309038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.309267] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.309276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.309284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.312786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.180 [2024-07-25 17:09:07.321864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.322659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.322697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.322708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.322945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.323166] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.323175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.323182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.326687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.180 [2024-07-25 17:09:07.335768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.336604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.336642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.336652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.336889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.337110] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.337118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.337126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.340635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.180 [2024-07-25 17:09:07.349515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.350245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.350283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.350293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.350530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.350755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.350764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.350772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.354282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.180 [2024-07-25 17:09:07.363371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.364230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.364268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.364278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.364516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.364736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.364745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.364754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.368265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.180 [2024-07-25 17:09:07.377139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.377890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.377928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.377939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.378176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.378403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.378413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.378420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.381921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.180 [2024-07-25 17:09:07.390997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.391724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.391762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.391772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.392009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.392238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.392248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.392255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.395764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.180 [2024-07-25 17:09:07.404774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.405600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.405637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.405648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.405884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.406105] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.406114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.406121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.409629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.180 [2024-07-25 17:09:07.418699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.180 [2024-07-25 17:09:07.419552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.180 [2024-07-25 17:09:07.419590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.180 [2024-07-25 17:09:07.419602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.180 [2024-07-25 17:09:07.419842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.180 [2024-07-25 17:09:07.420062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.180 [2024-07-25 17:09:07.420071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.180 [2024-07-25 17:09:07.420078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.180 [2024-07-25 17:09:07.423587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.180 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.180 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:47.180 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.180 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.180 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:47.181 [2024-07-25 17:09:07.432455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.181 [2024-07-25 17:09:07.433304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.181 [2024-07-25 17:09:07.433342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.181 [2024-07-25 17:09:07.433355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.181 [2024-07-25 17:09:07.433596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.181 [2024-07-25 17:09:07.433818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.181 [2024-07-25 17:09:07.433826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.181 [2024-07-25 17:09:07.433834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.181 [2024-07-25 17:09:07.437352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.181 [2024-07-25 17:09:07.446236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.181 [2024-07-25 17:09:07.446854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.181 [2024-07-25 17:09:07.446873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.181 [2024-07-25 17:09:07.446880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.181 [2024-07-25 17:09:07.447097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.181 [2024-07-25 17:09:07.447320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.181 [2024-07-25 17:09:07.447329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.181 [2024-07-25 17:09:07.447336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.181 [2024-07-25 17:09:07.450830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.442 [2024-07-25 17:09:07.460110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.442 [2024-07-25 17:09:07.460821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.442 [2024-07-25 17:09:07.460837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.442 [2024-07-25 17:09:07.460846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.442 [2024-07-25 17:09:07.461062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.442 [2024-07-25 17:09:07.461282] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.442 [2024-07-25 17:09:07.461290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.442 [2024-07-25 17:09:07.461297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.442 [2024-07-25 17:09:07.464803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
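The cycle repeated above is the bdevperf initiator retrying its reconnect to nqn.2016-06.io.spdk:cnode1: each attempt's connect() to 10.0.0.2:4420 comes back with errno 111 (ECONNREFUSED) because the target side has not registered its TCP listener yet; once nvmf_subsystem_add_listener runs further below, the next reset succeeds. As a rough illustration only (it assumes bash's /dev/tcp support and nvme-cli on the initiator host, neither of which is shown in this log), the listener can be polled from the shell before expecting reconnects to succeed:

  # Hedged sketch: wait until something accepts TCP connections on 10.0.0.2:4420,
  # then run a discovery to confirm the NVMe-oF listener is actually there.
  until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      sleep 0.5   # connect() keeps failing with ECONNREFUSED (errno 111) until a listener exists
  done
  nvme discover -t tcp -a 10.0.0.2 -s 4420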
00:29:47.442 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.442 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.442 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.442 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:47.442 [2024-07-25 17:09:07.473873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.442 [2024-07-25 17:09:07.474573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.442 [2024-07-25 17:09:07.474589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.442 [2024-07-25 17:09:07.474597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.442 [2024-07-25 17:09:07.474813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.442 [2024-07-25 17:09:07.475029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.442 [2024-07-25 17:09:07.475036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.442 [2024-07-25 17:09:07.475043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.442 [2024-07-25 17:09:07.476047] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.442 [2024-07-25 17:09:07.478543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.442 [2024-07-25 17:09:07.487609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.442 [2024-07-25 17:09:07.488294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.442 [2024-07-25 17:09:07.488332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.442 [2024-07-25 17:09:07.488344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.442 [2024-07-25 17:09:07.488584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.442 [2024-07-25 17:09:07.488804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.442 [2024-07-25 17:09:07.488813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.442 [2024-07-25 17:09:07.488820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:47.443 [2024-07-25 17:09:07.492330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.443 [2024-07-25 17:09:07.501406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.443 [2024-07-25 17:09:07.502255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.443 [2024-07-25 17:09:07.502293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.443 [2024-07-25 17:09:07.502305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.443 [2024-07-25 17:09:07.502545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.443 [2024-07-25 17:09:07.502766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.443 [2024-07-25 17:09:07.502775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.443 [2024-07-25 17:09:07.502782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.443 [2024-07-25 17:09:07.506293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
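rpc_cmd here is the harness wrapper around SPDK's scripts/rpc.py, so the target configuration driven by host/bdevperf.sh (the TCP transport and Malloc0 bdev created above, plus the subsystem, namespace, and listener registrations that follow a few lines below) corresponds roughly to the manual sequence sketched next. This is a sketch, not a transcript of this run: it assumes a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket, which the log does not state explicitly.

  # Sketch of the equivalent manual target setup with rpc.py (assumed defaults).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192                    # same transport options as host/bdevperf.sh@17
  $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace of cnode1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                                        # optional check that the subsystem and listener exist

Once the listener step runs (visible below as the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice), the reconnect attempts above stop failing and the log reports "Resetting controller successful".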
00:29:47.443 Malloc0 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:47.443 [2024-07-25 17:09:07.515161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.443 [2024-07-25 17:09:07.515911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.443 [2024-07-25 17:09:07.515930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.443 [2024-07-25 17:09:07.515938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.443 [2024-07-25 17:09:07.516156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.443 [2024-07-25 17:09:07.516382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.443 [2024-07-25 17:09:07.516392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.443 [2024-07-25 17:09:07.516399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.443 [2024-07-25 17:09:07.519896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:47.443 [2024-07-25 17:09:07.528972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.443 [2024-07-25 17:09:07.529754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.443 [2024-07-25 17:09:07.529792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef13b0 with addr=10.0.0.2, port=4420 00:29:47.443 [2024-07-25 17:09:07.529802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef13b0 is same with the state(5) to be set 00:29:47.443 [2024-07-25 17:09:07.530039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef13b0 (9): Bad file descriptor 00:29:47.443 [2024-07-25 17:09:07.530266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.443 [2024-07-25 17:09:07.530276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.443 [2024-07-25 17:09:07.530284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:47.443 [2024-07-25 17:09:07.533785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.443 [2024-07-25 17:09:07.538951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.443 [2024-07-25 17:09:07.542864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.443 17:09:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1610468 00:29:47.443 [2024-07-25 17:09:07.619114] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:57.442 00:29:57.442 Latency(us) 00:29:57.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.442 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:57.442 Verification LBA range: start 0x0 length 0x4000 00:29:57.442 Nvme1n1 : 15.00 8546.49 33.38 9818.31 0.00 6944.72 539.31 20862.29 00:29:57.442 =================================================================================================================== 00:29:57.442 Total : 8546.49 33.38 9818.31 0.00 6944.72 539.31 20862.29 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.442 rmmod nvme_tcp 00:29:57.442 rmmod nvme_fabrics 00:29:57.442 rmmod nvme_keyring 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 1611872 ']' 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1611872 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1611872 ']' 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1611872 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611872 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611872' 00:29:57.442 killing process with pid 1611872 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1611872 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1611872 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.442 17:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:58.384 00:29:58.384 real 0m27.773s 00:29:58.384 user 0m59.788s 00:29:58.384 sys 0m8.265s 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.384 ************************************ 00:29:58.384 END TEST nvmf_bdevperf 00:29:58.384 ************************************ 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.384 ************************************ 00:29:58.384 START TEST nvmf_target_disconnect 00:29:58.384 ************************************ 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:58.384 * Looking for test storage... 00:29:58.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.384 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.385 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.645 17:09:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.645 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:58.646 17:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:05.238 17:09:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:05.238 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:05.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:05.238 17:09:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.238 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:05.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:05.239 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:05.239 17:09:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.239 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:30:05.501 00:30:05.501 --- 10.0.0.2 ping statistics --- 00:30:05.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.501 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:30:05.501 00:30:05.501 --- 10.0.0.1 ping statistics --- 00:30:05.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.501 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 ************************************ 00:30:05.501 START TEST nvmf_target_disconnect_tc1 00:30:05.501 ************************************ 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:05.501 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:05.763 17:09:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:05.763 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:05.763 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:05.763 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:05.763 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:05.763 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.763 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.763 [2024-07-25 17:09:25.870192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.763 [2024-07-25 17:09:25.870256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x137fe20 with addr=10.0.0.2, port=4420 00:30:05.763 [2024-07-25 17:09:25.870282] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:05.763 [2024-07-25 17:09:25.870293] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:05.763 [2024-07-25 17:09:25.870301] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:05.763 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:05.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:05.764 Initializing NVMe Controllers 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:05.764 00:30:05.764 real 0m0.113s 00:30:05.764 user 0m0.053s 00:30:05.764 sys 0m0.059s 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.764 ************************************ 00:30:05.764 END TEST nvmf_target_disconnect_tc1 00:30:05.764 ************************************ 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:05.764 17:09:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:05.764 ************************************ 00:30:05.764 START TEST nvmf_target_disconnect_tc2 00:30:05.764 ************************************ 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1618294 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1618294 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1618294 ']' 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.764 17:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.764 [2024-07-25 17:09:26.014872] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:30:05.764 [2024-07-25 17:09:26.014931] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.026 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.026 [2024-07-25 17:09:26.101882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.026 [2024-07-25 17:09:26.195190] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
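Note: tc1 above passes precisely because the probe is expected to fail while nothing is listening yet; tc2 then calls disconnect_init, which launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket. For reference, the condensed bash sketch below reproduces the environment traced above (the nvmf_tcp_init namespace/addressing steps plus the target launch). It is a sketch only, not the test scripts: it assumes the two E810 ports are already named cvl_0_0/cvl_0_1, that rootdir points at the SPDK checkout used in this run, and that the RPC socket is the default /var/tmp/spdk.sock named in the waitforlisten message.

  #!/usr/bin/env bash
  # Condensed sketch of nvmf_tcp_init + disconnect_init as traced above.
  set -e
  NS=cvl_0_0_ns_spdk
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the trace

  # Put the target-side port into its own namespace and address both ends.
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Reachability check in both directions, as in the log.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Launch nvmf_tgt inside the namespace with the same flags as nvmfappstart -m 0xF0,
  # then wait for its RPC socket (a simplified stand-in for waitforlisten).
  ip netns exec "$NS" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  echo "nvmf_tgt up, pid $nvmfpid"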
00:30:06.026 [2024-07-25 17:09:26.195261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.026 [2024-07-25 17:09:26.195269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.026 [2024-07-25 17:09:26.195281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.026 [2024-07-25 17:09:26.195287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.026 [2024-07-25 17:09:26.195449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:06.026 [2024-07-25 17:09:26.195749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:06.026 [2024-07-25 17:09:26.195910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:06.026 [2024-07-25 17:09:26.195911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.599 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.599 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:06.599 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:06.599 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.599 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.600 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.600 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.600 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.600 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.886 Malloc0 00:30:06.886 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.887 [2024-07-25 17:09:26.890837] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
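Note: the rpc_cmd calls traced here and continued just below build the tc2 target configuration: a Malloc0 ram-disk bdev (the 64 and 512 arguments are size in MiB and block size), the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and data plus discovery listeners on 10.0.0.2:4420. A minimal equivalent is sketched below; it assumes rpc_cmd simply forwards to SPDK's scripts/rpc.py on the default socket, which is an assumption about the autotest helper, not something shown in this log.

  # Sketch of the JSON-RPC configuration sequence traced above and just below.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_cmd() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # assumed stand-in for the test helper

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB ram disk, 512 B blocks
  rpc_cmd nvmf_create_transport -t tcp -o                               # "-o" comes from NVMF_TRANSPORT_OPTS in the trace
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420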
00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.887 [2024-07-25 17:09:26.931212] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1618401 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:06.887 17:09:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.887 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.855 17:09:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1618294 00:30:08.855 17:09:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting 
I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Write completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 Read completed with error (sct=0, sc=8) 00:30:08.855 starting I/O failed 00:30:08.855 [2024-07-25 17:09:28.965034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.855 [2024-07-25 17:09:28.965471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.855 [2024-07-25 17:09:28.965492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.855 qpair failed and we were unable to recover it. 00:30:08.855 [2024-07-25 17:09:28.965917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.855 [2024-07-25 17:09:28.965928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.855 qpair failed and we were unable to recover it. 
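Note: everything from here on is the intended failure signature of tc2. host/target_disconnect.sh@45 has sent kill -9 to the target pid while the reconnect example (run with -q 32) is mid-flight, so the outstanding commands complete with error (sct=0, sc=8), the qpair reports CQ transport error -6 (No such device or address), and every reconnect attempt gets connect() errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any more. A minimal sketch of that fault-injection step follows; it assumes nvmfpid was captured at target start as in the trace, and the /dev/tcp probe is purely illustrative, not part of the test.

  # Fault injection as traced above: kill the target hard, then give the
  # reconnect example a couple of seconds to hit its error path.
  kill -9 "$nvmfpid"     # nvmfpid captured when nvmf_tgt was started (1618294 in this run)
  sleep 2

  # Illustration only: with the target gone, a plain TCP connect to the
  # listener address is refused as well (errno 111 / ECONNREFUSED).
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "10.0.0.2:4420 refused, as expected after kill -9"
  fi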
00:30:08.855 [2024-07-25 17:09:28.966080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.855 [2024-07-25 17:09:28.966090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.855 qpair failed and we were unable to recover it. 00:30:08.855 [2024-07-25 17:09:28.966595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.855 [2024-07-25 17:09:28.966606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.855 qpair failed and we were unable to recover it. 00:30:08.855 [2024-07-25 17:09:28.967070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.855 [2024-07-25 17:09:28.967080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.855 qpair failed and we were unable to recover it. 00:30:08.855 [2024-07-25 17:09:28.967329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.967346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.967847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.967858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.968287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.968298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.968767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.968777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.969256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.969267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.969764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.969774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.970266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.970277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 
00:30:08.856 [2024-07-25 17:09:28.970656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.970667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.971117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.971127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.971380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.971396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.971893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.971904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.972360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.972371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.972839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.972850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.973026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.973037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.973272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.973283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.973707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.973717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.974186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.974197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 
00:30:08.856 [2024-07-25 17:09:28.974564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.974574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.975040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.975050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.975391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.975405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.975885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.975895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.976370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.976380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.976634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.976647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.977125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.977135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.977456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.977467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.977811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.977821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.978305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.978315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 
00:30:08.856 [2024-07-25 17:09:28.978570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.978580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.978896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.978907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.979399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.979409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.979887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.979897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.980266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.980276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.980764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.980774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.981258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.981268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.981726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.981736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.982210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.982221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 00:30:08.856 [2024-07-25 17:09:28.982698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.982708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.856 qpair failed and we were unable to recover it. 
00:30:08.856 [2024-07-25 17:09:28.983046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.856 [2024-07-25 17:09:28.983057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.983631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.983668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.984138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.984151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.984617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.984628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.984961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.984971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.985426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.985463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.985941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.985952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.986480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.986516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.986903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.986916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.987420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.987456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 
00:30:08.857 [2024-07-25 17:09:28.987851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.987863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.988112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.988122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.988575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.988586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.988970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.988979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.989420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.989430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.989682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.989692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.990178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.990188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.990532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.990542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.990922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.990932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.991272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.991282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 
00:30:08.857 [2024-07-25 17:09:28.991755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.991765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.992094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.992104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.992404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.992416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.992791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.992802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.993295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.993308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.993796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.993810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.994274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.994287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.994774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.994787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.995034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.995052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.995535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.995548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 
00:30:08.857 [2024-07-25 17:09:28.995991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.996002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.996473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.996485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.996957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.996968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.997492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.997536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.998017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.998032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.998473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.998516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.998996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.999011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.857 [2024-07-25 17:09:28.999600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.857 [2024-07-25 17:09:28.999644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.857 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.000130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.000146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.000633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.000647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 
00:30:08.858 [2024-07-25 17:09:29.001108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.001121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.001714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.001758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.002224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.002240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.002723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.002740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.003221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.003241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.003710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.003726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.004237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.004264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.004775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.004792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.005265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.005282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.005732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.005748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 
00:30:08.858 [2024-07-25 17:09:29.006195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.006219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.006521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.006540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.006961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.006979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.007457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.007475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.007946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.007962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.008489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.008545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.009004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.009024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.009499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.009556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.010001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.010022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.010535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.010591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 
00:30:08.858 [2024-07-25 17:09:29.011050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.011070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.011551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.011570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.012046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.012064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.012570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.012625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.013125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.013145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.013631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.013656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.014137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.014154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.014718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.014785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.015395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.015461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.015984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.016010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 
00:30:08.858 [2024-07-25 17:09:29.016589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.016656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.017088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.017114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.017632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.017655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.018136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.018156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.018715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.018737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.858 qpair failed and we were unable to recover it. 00:30:08.858 [2024-07-25 17:09:29.019245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.858 [2024-07-25 17:09:29.019277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.019781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.019801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.020265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.020287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.020796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.020816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.021298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.021319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 
00:30:08.859 [2024-07-25 17:09:29.021781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.021802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.022282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.022304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.022785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.022806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.023138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.023166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.023674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.023696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.024210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.024232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.024699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.024721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.025229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.025251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.025606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.025634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.026159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.026187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 
00:30:08.859 [2024-07-25 17:09:29.026717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.026746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.027229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.027259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.027757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.027785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.028264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.028293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.028840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.028868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.029252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.029285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.029778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.029806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.030308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.030337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.030837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.030864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.031340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.031369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 
00:30:08.859 [2024-07-25 17:09:29.031860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.031888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.032383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.032411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.032891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.032920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.033400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.033428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.033924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.033952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.034437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.034473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.034969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.034997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.035557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.035647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.036228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.036265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.036771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.036801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 
00:30:08.859 [2024-07-25 17:09:29.037443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.037531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.038107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.038142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.859 qpair failed and we were unable to recover it. 00:30:08.859 [2024-07-25 17:09:29.038531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.859 [2024-07-25 17:09:29.038567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.039060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.039089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.039586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.039615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.040093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.040120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.040597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.040626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.041102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.041130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.041674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.041704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.042046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.042075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 
00:30:08.860 [2024-07-25 17:09:29.042573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.042603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.043086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.043115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.043666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.043695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.044169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.044196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.044686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.044714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.045158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.045186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.045703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.045732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.046235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.046266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.046805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.046834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.047383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.047433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 
00:30:08.860 [2024-07-25 17:09:29.047966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.047995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.048582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.048671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.049140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.049182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.049708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.049739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.050246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.050278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.050783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.050812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.051404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.051493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.052066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.052102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.052669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.052759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.053443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.053531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 
00:30:08.860 [2024-07-25 17:09:29.054004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.054038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.054538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.860 [2024-07-25 17:09:29.054569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.860 qpair failed and we were unable to recover it. 00:30:08.860 [2024-07-25 17:09:29.055063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.055092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.055574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.055603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.056079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.056107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.056594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.056634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.057056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.057089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.057587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.057617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.058096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.058123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.058608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.058639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 
00:30:08.861 [2024-07-25 17:09:29.059126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.059154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.059595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.059624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.060122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.060150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.060542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.060572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.060946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.060974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.061357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.061398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.061875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.061904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.062386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.062415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.062912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.062940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.063328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.063363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 
00:30:08.861 [2024-07-25 17:09:29.063856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.063885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.064469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.064557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.065150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.065185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.065710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.065741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.066247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.066282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.066782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.066810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.067289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.067318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.067795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.067824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.068312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.068341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.068838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.068866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 
00:30:08.861 [2024-07-25 17:09:29.069346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.069375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.069877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.069905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.070406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.070437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.070930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.070958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.071448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.071478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.071956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.071983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.072568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.072656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.073240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.073277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.073783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.861 [2024-07-25 17:09:29.073813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.861 qpair failed and we were unable to recover it. 00:30:08.861 [2024-07-25 17:09:29.074349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.074380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 
00:30:08.862 [2024-07-25 17:09:29.074857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.074885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.075367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.075397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.075896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.075924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.076441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.076470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.076953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.076981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.077565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.077663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.078244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.078282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.078757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.078787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.079278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.079309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.079817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.079845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 
00:30:08.862 [2024-07-25 17:09:29.080286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.080316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.080819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.080847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.081331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.081360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.081733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.081760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.082244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.082275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.082675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.082703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.083181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.083221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.083594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.083633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.084031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.084059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.084555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.084586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 
00:30:08.862 [2024-07-25 17:09:29.084969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.084997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.085495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.085525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.086003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.086031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.086409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.086444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.086922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.086949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.087429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.087459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.087842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.087870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.088383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.088412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.088907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.088935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.089432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.089461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 
00:30:08.862 [2024-07-25 17:09:29.089944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.089971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.090458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.090487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.090979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.091009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.091597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.091686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.092148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.092183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.092711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.092742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.862 [2024-07-25 17:09:29.093262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.862 [2024-07-25 17:09:29.093307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.862 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.093821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.093850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.094308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.094337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.094842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.094872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 
00:30:08.863 [2024-07-25 17:09:29.095355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.095385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.095953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.095983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.096481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.096512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.096992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.097020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.097605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.097694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.098399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.098500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.099084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.099119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.099590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.099622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.100170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.100199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.100721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.100749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 
00:30:08.863 [2024-07-25 17:09:29.101260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.101304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.101819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.101846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.102362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.102393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.102892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.102921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.103425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.103455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.103962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.103990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.104575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.104664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.105267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.105320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.105834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.105864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.106373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.106404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 
00:30:08.863 [2024-07-25 17:09:29.106904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.106933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.107419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.107447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.107942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.107972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.108476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.108566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.109154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.109188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.109716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.109746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.110440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.110532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.111090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.111126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.111606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.111638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.112127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.112156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 
00:30:08.863 [2024-07-25 17:09:29.112550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.112580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.113090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.113119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.113653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.113684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.114189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.863 [2024-07-25 17:09:29.114230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.863 qpair failed and we were unable to recover it. 00:30:08.863 [2024-07-25 17:09:29.114617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.114652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.115025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.115053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.115544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.115574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.116073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.116101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.116625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.116655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.117136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.117163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 
00:30:08.864 [2024-07-25 17:09:29.117655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.117685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.118198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.118237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.118622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.118651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.119021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.119060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.119563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.119593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.120098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.120127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.120619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.120649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.121033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.121069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.121443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.121472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.121948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.121976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 
00:30:08.864 [2024-07-25 17:09:29.122460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.122488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.122975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.123003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-07-25 17:09:29.123511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.864 [2024-07-25 17:09:29.123541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:08.864 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.124023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.124053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.124573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.124604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.125105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.125134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.125587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.125616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.126101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.126129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.126607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.126636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.127135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.127164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 
00:30:09.134 [2024-07-25 17:09:29.127648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.127676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.128176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.128219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.128696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.128725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.129407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.129497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.130054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.130089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.130583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.130614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.131097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.131126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.131639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.131670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.132154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.132183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.132717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.132749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 
00:30:09.134 [2024-07-25 17:09:29.133440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.133529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.134114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.134150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.134673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.134715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.135448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.135537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.136097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.136131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.136699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.136732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.137216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.137246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.137660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.137688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.138168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.138195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.138685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.138714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 
00:30:09.134 [2024-07-25 17:09:29.139226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.139257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.139745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.139773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.140273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.140304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.140822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.134 [2024-07-25 17:09:29.140849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.134 qpair failed and we were unable to recover it. 00:30:09.134 [2024-07-25 17:09:29.141266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.141313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.141824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.141853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.142469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.142559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.143025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.143062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.143538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.143569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.144048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.144076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 
00:30:09.135 [2024-07-25 17:09:29.144570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.144601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.145083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.145111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.145543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.145573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.145962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.145990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.146491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.146521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.147024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.147051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.147442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.147473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.147940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.147969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.148460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.148490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.149004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.149032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 
00:30:09.135 [2024-07-25 17:09:29.149624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.149715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.150399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.150488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.151075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.151111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.151601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.151633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.152126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.152154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.152710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.152739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.153258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.153303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.153798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.153826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.154310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.154340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.154750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.154779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 
00:30:09.135 [2024-07-25 17:09:29.155286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.155317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.155772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.155800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.156308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.156349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.156824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.156852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.157334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.157363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.157863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.157891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.158378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.158406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.158890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.158918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.159404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.159434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.159983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.160011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 
00:30:09.135 [2024-07-25 17:09:29.160518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.160547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.135 qpair failed and we were unable to recover it. 00:30:09.135 [2024-07-25 17:09:29.161029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.135 [2024-07-25 17:09:29.161057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.161536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.161565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.162057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.162084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.162614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.162643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.163145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.163173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.163691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.163721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.164270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.164317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.164823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.164851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.165338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.165368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 
00:30:09.136 [2024-07-25 17:09:29.165740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.165768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.166272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.166301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.166783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.166811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.167314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.167343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.167827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.167855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.168343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.168372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.168873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.168901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.169383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.169413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.169780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.169807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.170291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.170320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 
00:30:09.136 [2024-07-25 17:09:29.170808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.170836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.171336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.171366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.171862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.171889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.172383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.172413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.172898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.172926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.173428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.173457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.173968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.173996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.174390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.174419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.174919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.174947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.175517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.175609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 
00:30:09.136 [2024-07-25 17:09:29.176074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.176110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.176511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.176542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.177025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.177064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.177452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.177496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.178000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.178031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.178519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.178549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.179031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.179058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.179546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.179575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.179969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.136 [2024-07-25 17:09:29.180006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.136 qpair failed and we were unable to recover it. 00:30:09.136 [2024-07-25 17:09:29.180384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.180415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 
00:30:09.137 [2024-07-25 17:09:29.180783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.180812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.181317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.181346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.181846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.181874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.182305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.182341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.182838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.182866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.183362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.183392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.183775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.183810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.184216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.184249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.184745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.184773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.185276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.185307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 
00:30:09.137 [2024-07-25 17:09:29.185847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.185875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.186358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.186387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.186888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.186916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.187398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.187427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.187913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.187940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.188439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.188468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.188947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.188974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.189566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.189657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.190187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.190242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.190780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.190810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 
00:30:09.137 [2024-07-25 17:09:29.191406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.191496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.192081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.192117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.192638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.192670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.193215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.193245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.193752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.193781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.194384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.194476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.195072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.195107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.195628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.195660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.196152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.196180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.196696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.196728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 
00:30:09.137 [2024-07-25 17:09:29.197407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.197497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.198076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.198112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.198592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.198635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.199039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.199068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.199575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.199605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.200088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.200117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.137 [2024-07-25 17:09:29.200603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.137 [2024-07-25 17:09:29.200632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.137 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.201140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.201168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.201613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.201648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.202123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.202152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 
00:30:09.138 [2024-07-25 17:09:29.202709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.202738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.203134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.203174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.203694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.203724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.204247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.204278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.204701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.204729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.205225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.205255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.205670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.205699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.206220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.206250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.206792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.206820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.207331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.207360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 
00:30:09.138 [2024-07-25 17:09:29.207843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.207871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.208490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.208584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.209174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.209227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.209606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.209636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.210135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.210164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.210535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.210564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.211115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.211144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.211731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.211823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.212417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.212508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.213091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.213127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 
00:30:09.138 [2024-07-25 17:09:29.213602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.213635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.214158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.214186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.214711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.214740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.215268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.215316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.215878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.215906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.216411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.216441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.216953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.138 [2024-07-25 17:09:29.216981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.138 qpair failed and we were unable to recover it. 00:30:09.138 [2024-07-25 17:09:29.217521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.217614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.218230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.218268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.218776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.218806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 
00:30:09.139 [2024-07-25 17:09:29.219462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.219553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.220139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.220174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.220734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.220775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.221444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.221535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.222075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.222110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.222637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.222670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.223228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.223258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.223741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.223771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.224474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.224566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.225093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.225129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 
00:30:09.139 [2024-07-25 17:09:29.225654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.225686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.226177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.226217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.226703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.226732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.227115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.227144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.227748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.227841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.228550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.228643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.229194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.229250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.229674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.229704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.230081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.230111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.230643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.230673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 
00:30:09.139 [2024-07-25 17:09:29.231215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.231245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.231634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.231662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.232165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.232193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.232719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.232747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.233268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.233315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.233750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.233778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.234399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.234493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.235080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.235115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.235512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.235547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.236062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.236092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 
00:30:09.139 [2024-07-25 17:09:29.236490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.236520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.236871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.236899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.237323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.237352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.237834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.237864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.139 [2024-07-25 17:09:29.238332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.139 [2024-07-25 17:09:29.238362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.139 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.238880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.238909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.239424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.239454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.239957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.239987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.240565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.240596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.241110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.241139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 
00:30:09.140 [2024-07-25 17:09:29.241604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.241634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.242181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.242220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.242712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.242746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.243405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.243497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.243983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.244028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.244549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.244582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.245134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.245164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.245696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.245727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.246107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.246136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.246675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.246705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 
00:30:09.140 [2024-07-25 17:09:29.247111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.247139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.247635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.247664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.248218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.248247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.248757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.248785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.249534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.249629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.250233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.250271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.250803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.250834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.251454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.251546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.252146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.252182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.252750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.252783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 
00:30:09.140 [2024-07-25 17:09:29.253174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.253212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.253705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.253734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.254453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.254546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.255101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.255137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.255741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.255834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.256517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.256610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.257225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.257264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.257803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.257832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.258432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.258525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.259090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.259127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 
00:30:09.140 [2024-07-25 17:09:29.259619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.259650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.260132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.140 [2024-07-25 17:09:29.260162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.140 qpair failed and we were unable to recover it. 00:30:09.140 [2024-07-25 17:09:29.260519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.260549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.261065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.261094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.261550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.261580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.262092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.262120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.262651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.262680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.263058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.263088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.263591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.263620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.264023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.264071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 
00:30:09.141 [2024-07-25 17:09:29.264494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.264529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.265033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.265063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.265557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.265595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.266146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.266174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.266675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.266704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.267224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.267253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.267729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.267758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.268276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.268324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.268845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.268873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.269275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.269304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 
00:30:09.141 [2024-07-25 17:09:29.269685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.269723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.270219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.270250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.270757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.270784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.271290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.271319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.271811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.271840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.272333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.272362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.272849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.272877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.273389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.273418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.273734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.273763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.274153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.274181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 
00:30:09.141 [2024-07-25 17:09:29.274707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.274736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.275246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.275274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.275761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.275789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.276309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.276339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.276831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.276860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.277366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.277395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.277882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.277910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.278428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.278456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.278953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.141 [2024-07-25 17:09:29.278981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.141 qpair failed and we were unable to recover it. 00:30:09.141 [2024-07-25 17:09:29.279527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.279557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 
00:30:09.142 [2024-07-25 17:09:29.279937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.279965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.280453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.280482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.280967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.280996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.281559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.281653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.282400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.282493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.283099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.283136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.283659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.283690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.284254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.284286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.284851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.284880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.285483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.285576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 
00:30:09.142 [2024-07-25 17:09:29.286178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.286229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.286747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.286777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.287427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.287533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.288121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.288158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.288680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.288712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.289223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.289252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.289769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.289798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.290450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.290543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.291143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.291179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.291710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.291741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 
00:30:09.142 [2024-07-25 17:09:29.292278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.292327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.292852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.292882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.293481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.293576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.294112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.294148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.294715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.294747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.295265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.295314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.295849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.295879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.296394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.296424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.296917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.296945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.297434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.297464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 
00:30:09.142 [2024-07-25 17:09:29.297943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.297970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.298565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.298663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.299274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.299333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.299832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.299861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.300369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.300400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.300915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.300944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.142 [2024-07-25 17:09:29.301436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.142 [2024-07-25 17:09:29.301466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.142 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.301982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.302010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.302591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.302686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.303446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.303541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 
00:30:09.143 [2024-07-25 17:09:29.304026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.304070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.304572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.304605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.305102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.305131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.305645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.305674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.306192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.306234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.306762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.306791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.307190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.307243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.307776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.307805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.308236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.308267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.308675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.308711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 
00:30:09.143 [2024-07-25 17:09:29.309090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.309124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.309645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.309675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.310156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.310196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.310716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.310743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.311263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.311311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.311724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.311752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.312325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.312355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.312868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.312895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.313387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.313420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.313914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.313943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 
00:30:09.143 [2024-07-25 17:09:29.314429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.314458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.314940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.314968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.315554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.315650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.316238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.316277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.316796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.316827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.317324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.317355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.317876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.317908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.318417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.318448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.143 qpair failed and we were unable to recover it. 00:30:09.143 [2024-07-25 17:09:29.318961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.143 [2024-07-25 17:09:29.318989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.319507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.319536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 
00:30:09.144 [2024-07-25 17:09:29.319983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.320014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.320631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.320728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.321455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.321552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.322249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.322286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.322801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.322831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.323358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.323387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.323843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.323872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.324452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.324483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.324979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.325008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.325450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.325481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 
00:30:09.144 [2024-07-25 17:09:29.325883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.325917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.326447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.326477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.326885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.326914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.327413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.327442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.327932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.327960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.328453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.328483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.328984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.329013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.329626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.329724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.330233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.330272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.330811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.330840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 
00:30:09.144 [2024-07-25 17:09:29.331485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.331583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.331977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.332013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.332606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.332715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.333470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.333566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.334116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.334152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.334604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.334649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.334971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.335004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.335502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.335531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.335937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.335966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.336496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.336526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 
00:30:09.144 [2024-07-25 17:09:29.337019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.337048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.337585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.337614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.338108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.338138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.338663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.338692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.339215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.339245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.144 [2024-07-25 17:09:29.339788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.144 [2024-07-25 17:09:29.339815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.144 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.340455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.340552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.341145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.341181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.341721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.341751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.342284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.342336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 
00:30:09.145 [2024-07-25 17:09:29.342869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.342900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.343507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.343603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.344223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.344260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.344720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.344749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.345454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.345551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.346151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.346187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.346794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.346825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.347427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.347523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.348126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.348161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.348693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.348726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 
00:30:09.145 [2024-07-25 17:09:29.349475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.349570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.350231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.350269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.350767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.350797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.351483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.351581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.352184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.352240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.352809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.352839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.353271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.353323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.353852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.353881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.354506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.354604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.355249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.355287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 
00:30:09.145 [2024-07-25 17:09:29.355805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.355835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.356472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.356572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.357181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.357257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.357783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.357813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.358430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.358530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.359125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.359161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.359728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.359760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.360453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.360553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.361163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.361199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.361738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.361768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 
00:30:09.145 [2024-07-25 17:09:29.362399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.362499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.363109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.363146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.363755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.145 [2024-07-25 17:09:29.363787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.145 qpair failed and we were unable to recover it. 00:30:09.145 [2024-07-25 17:09:29.364454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.364553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.365155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.365193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.365703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.365734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.366408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.366506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.367106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.367142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.367680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.367713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.368224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.368254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 
00:30:09.146 [2024-07-25 17:09:29.368790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.368818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.369424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.369524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.370125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.370161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.370710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.370741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.371140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.371169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.371733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.371763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.372445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.372546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.373162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.373198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.373767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.373796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.374418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.374517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 
00:30:09.146 [2024-07-25 17:09:29.375114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.375150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.375682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.375715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.376127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.376172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.376717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.376749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.377237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.377267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.377705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.377745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.378252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.378287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.378799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.378829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.379375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.379405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.379935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.379963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 
00:30:09.146 [2024-07-25 17:09:29.380460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.380489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.381007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.381037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.381437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.381482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.382035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.382065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.382424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.382455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.382880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.382910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.383433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.383462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.383964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.383992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.384525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.384554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 00:30:09.146 [2024-07-25 17:09:29.385084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.146 [2024-07-25 17:09:29.385112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.146 qpair failed and we were unable to recover it. 
00:30:09.431 [2024-07-25 17:09:29.495466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.431 [2024-07-25 17:09:29.495567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.431 qpair failed and we were unable to recover it. 00:30:09.431 [2024-07-25 17:09:29.495961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.431 [2024-07-25 17:09:29.495999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.431 qpair failed and we were unable to recover it. 00:30:09.431 [2024-07-25 17:09:29.496626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.431 [2024-07-25 17:09:29.496729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.431 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.497224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.497261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.497822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.497852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.498176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.498214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.498739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.498768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.499491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.499592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.500064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.500101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.500614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.500646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 
00:30:09.432 [2024-07-25 17:09:29.501169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.501212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.501729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.501761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.502450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.502552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.503163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.503199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.503798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.503827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.504482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.504582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.505183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.505243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.505779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.505808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.506507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.506609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.507245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.507284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 
00:30:09.432 [2024-07-25 17:09:29.507828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.507858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.508498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.508600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.509224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.509263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.509821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.509851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.510485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.510586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.511198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.511252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.511780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.511811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.512425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.512526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.513024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.513060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.513555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.513600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 
00:30:09.432 [2024-07-25 17:09:29.514009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.514057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.514574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.514605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.515122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.515150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.515648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.515677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.516216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.516247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.516784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.516813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.517433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.517537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.518142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.518178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.518610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.518642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.519161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.519189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 
00:30:09.432 [2024-07-25 17:09:29.519638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.519684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.520195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.520239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.520727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.520757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.521470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.521571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.432 qpair failed and we were unable to recover it. 00:30:09.432 [2024-07-25 17:09:29.522112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.432 [2024-07-25 17:09:29.522148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.522702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.522734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.523184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.523230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.523727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.523756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.524157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.524185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.524733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.524762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 
00:30:09.433 [2024-07-25 17:09:29.525416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.525518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.526128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.526165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.526602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.526633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.527043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.527091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.527616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.527648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.528145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.528173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.528711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.528742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.529457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.529561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.530062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.530109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.530652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.530686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 
00:30:09.433 [2024-07-25 17:09:29.531169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.531198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.531772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.531802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.532445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.532547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.533158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.533194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.533754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.533786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.534474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.534577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.535185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.535251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.535817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.535847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.536481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.536584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.537195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.537272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 
00:30:09.433 [2024-07-25 17:09:29.537815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.537844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.538475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.538577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.539193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.539251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.539751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.539782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.540400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.540502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.541116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.541152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.541691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.541723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.542406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.542507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.543120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.543156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.543692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.543724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 
00:30:09.433 [2024-07-25 17:09:29.544238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.544272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.544777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.544807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.545383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.545412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.545916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.545944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.546452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.546482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.547036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.547063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.547573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.547603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.433 qpair failed and we were unable to recover it. 00:30:09.433 [2024-07-25 17:09:29.548109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.433 [2024-07-25 17:09:29.548137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.548641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.548669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.549234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.549267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 
00:30:09.434 [2024-07-25 17:09:29.549787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.549815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.550336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.550367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.550682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.550711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.551220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.551250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.551670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.551699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.552212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.552241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.552763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.552792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.553198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.553242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.553738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.553768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.554401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.554504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 
00:30:09.434 [2024-07-25 17:09:29.555139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.555174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.555519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.555552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.555955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.555990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.556608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.556709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.557231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.557269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.557819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.557849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.558276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.558331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.558874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.558903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.559498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.559599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.560098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.560146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 
00:30:09.434 [2024-07-25 17:09:29.560694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.560727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.561129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.561158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.561511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.561542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.562046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.562076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.562631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.562661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.563185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.563224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.563723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.563751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.564400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.564504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.565120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.565156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.565640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.565672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 
00:30:09.434 [2024-07-25 17:09:29.566231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.566263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.566669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.566703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.567087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.567117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.567537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.567570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.568087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.568115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.568616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.568646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.569148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.569178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.569693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.569723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.570271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.570325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.570864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.570892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 
00:30:09.434 [2024-07-25 17:09:29.571527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.571628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.572250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.572289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.572826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.572857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.573474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.573576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.574137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.574173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.574747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.434 [2024-07-25 17:09:29.574778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.434 qpair failed and we were unable to recover it. 00:30:09.434 [2024-07-25 17:09:29.575409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.575512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.576016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.576057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.576582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.576613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.577121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.577150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 
00:30:09.435 [2024-07-25 17:09:29.577691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.577721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.578270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.578324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.578861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.578889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.579394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.579424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.579930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.579959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.580481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.580511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.581042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.581072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.581587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.581617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.582114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.582142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 00:30:09.435 [2024-07-25 17:09:29.582655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.435 [2024-07-25 17:09:29.582698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.435 qpair failed and we were unable to recover it. 
00:30:09.715 [2024-07-25 17:09:29.693414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.693515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.694113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.694149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.694720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.694752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.695156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.695188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.695725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.695755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.696402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.696505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.697161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.697195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.697749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.697779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.698402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.698503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.699097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.699133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 
00:30:09.715 [2024-07-25 17:09:29.699595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.699627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.700166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.700196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.700728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.700757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.701453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.701555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.702170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.702226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.702548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.702578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.703122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.703151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.703678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.703708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.704227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.704257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.704702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.704730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 
00:30:09.715 [2024-07-25 17:09:29.705238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.705269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.705765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.705794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.706414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.706515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.707131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.707168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.707690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.707722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.708269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.708323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.708853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.708881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.709479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.709581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.710218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.710256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.710668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.710698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 
00:30:09.715 [2024-07-25 17:09:29.711445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.711546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.712045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.712094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.712682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.712715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.713229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.713260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.713756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.713785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.714397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.714497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.715114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.715162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.715687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.715719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.716485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.716586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.715 [2024-07-25 17:09:29.716972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.717019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 
00:30:09.715 [2024-07-25 17:09:29.717580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.715 [2024-07-25 17:09:29.717613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.715 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.718166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.718195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.718733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.718762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.719165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.719193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.719709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.719737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.720447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.720548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.721162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.721198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.721672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.721703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.722163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.722192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.722734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.722763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 
00:30:09.716 [2024-07-25 17:09:29.723467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.723571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.724245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.724285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.724828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.724858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.725380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.725411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.725922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.725952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.726618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.726719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.727496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.727597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.728186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.728240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.728762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.728794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.729092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.729120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 
00:30:09.716 [2024-07-25 17:09:29.729688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.729718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.730267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.730319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.730857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.730886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.731471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.731502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.732003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.732031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.732573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.732676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.733459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.733561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.734108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.734144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.734688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.734720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.735109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.735138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 
00:30:09.716 [2024-07-25 17:09:29.735648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.735678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.736181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.736218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.736773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.736801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.737486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.737587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.738219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.738257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.738792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.738822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.739120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.739177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.739737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.739768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.740100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.740129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.740669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.740699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 
00:30:09.716 [2024-07-25 17:09:29.741266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.741295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.741703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.741730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.742210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.742239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.742764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.742793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.743301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.743331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.743841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.743869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.744376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.744405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.744716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.744746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.745286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.745316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 00:30:09.716 [2024-07-25 17:09:29.745604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.716 [2024-07-25 17:09:29.745633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.716 qpair failed and we were unable to recover it. 
00:30:09.716 [2024-07-25 17:09:29.746167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.746196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.746599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.746628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.747136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.747165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.747674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.747703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.748218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.748247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.748669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.748697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.749215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.749245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.749801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.749829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.750332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.750362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.750758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.750786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 
00:30:09.717 [2024-07-25 17:09:29.751449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.751554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.752032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.752068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.752638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.752669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.753178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.753218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.753624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.753652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.754245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.754277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.754800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.754830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.755426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.755528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.756142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.756178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.756708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.756738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 
00:30:09.717 [2024-07-25 17:09:29.757463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.757565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.758181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.758240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.758789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.758818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.759434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.759534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.760146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.760182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.760733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.760765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.761234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.761278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.761824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.761852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.762477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.762579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.763189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.763244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 
00:30:09.717 [2024-07-25 17:09:29.763765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.763795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.764411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.764513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.765114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.765149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.765683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.765715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.766216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.766246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.766649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.766678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.767191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.767234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.767623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.767651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.768224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.768253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.768741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.768769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 
00:30:09.717 [2024-07-25 17:09:29.769342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.769372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.769684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.769713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.770189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.770234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.770781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.770809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.717 [2024-07-25 17:09:29.771444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.717 [2024-07-25 17:09:29.771546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.717 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.772051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.772087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.772598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.772629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.773078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.773106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.773661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.773690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.774097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.774141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 
00:30:09.718 [2024-07-25 17:09:29.774673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.774705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.775224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.775272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.775819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.775848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.776363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.776393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.776751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.776779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.777317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.777347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.777801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.777829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.778255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.778284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.778704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.778732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.779219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.779249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 
00:30:09.718 [2024-07-25 17:09:29.779786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.779813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.780264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.780292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.780827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.780854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.781373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.781402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.781983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.782010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.782557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.782586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.783088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.783123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.783616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.783645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.784149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.784177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.784617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.784646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 
00:30:09.718 [2024-07-25 17:09:29.785170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.785198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.785717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.785745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.786280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.786333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.786861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.786889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.787494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.787595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.788197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.788251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.788802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.788832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.789465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.789566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.790180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.790234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 00:30:09.718 [2024-07-25 17:09:29.790816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.718 [2024-07-25 17:09:29.790845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.718 qpair failed and we were unable to recover it. 
00:30:09.718 [2024-07-25 17:09:29.791489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.791591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.792070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.792106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.792701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.792732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.793267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.793321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.793858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.793886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.794394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.794425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.794936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.794964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.795581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.795682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.796167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.796219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.796787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.796817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 
00:30:09.719 [2024-07-25 17:09:29.797445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.797546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.798012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.798048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.798576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.798608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.799036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.799066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.799569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.799602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.800123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.800152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.800659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.800689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.801189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.801228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.801765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.801793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.802459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.802562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 
00:30:09.719 [2024-07-25 17:09:29.803076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.803113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.803641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.803673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.804190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.804231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.804767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.804794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.805208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.805237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.805737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.805766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.806192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.806255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.806773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.806801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.807268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.807301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.807821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.807849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 
00:30:09.719 [2024-07-25 17:09:29.808384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.808413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.808943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.808971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.809570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.809672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.810478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.810580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.811193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.811271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.811813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.811843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.812536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.812638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.813277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.813343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.813875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.813905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.814450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.814552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 
00:30:09.719 [2024-07-25 17:09:29.815155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.815190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.815804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.815835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.816486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.816587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.719 qpair failed and we were unable to recover it. 00:30:09.719 [2024-07-25 17:09:29.817188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.719 [2024-07-25 17:09:29.817241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.817808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.817838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.818457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.818558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.819192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.819248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.819764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.819794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.820422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.820522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.821183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.821237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 
00:30:09.720 [2024-07-25 17:09:29.821767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.821797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.822436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.822539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.823142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.823177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.823793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.823825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.824435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.824537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.825139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.825174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.825751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.825782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.826453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.826554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.827164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.827218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.827735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.827765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 
00:30:09.720 [2024-07-25 17:09:29.828451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.828552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.829166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.829218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.829543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.829574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.830068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.830096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.830411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.830442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.830972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.831000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.831600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.831714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.832447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.832548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.832945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.832980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.833391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.833423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 
00:30:09.720 [2024-07-25 17:09:29.833994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.834022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.834539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.834570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.835074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.835104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.835415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.835448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.835975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.836003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.836613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.836715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.837448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.837550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.837950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.837985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.838505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.838536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.838961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.838990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 
00:30:09.720 [2024-07-25 17:09:29.839470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.839527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.840078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.840109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.840651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.840681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.841119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.841146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.841683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.841713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.842068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.842096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.842526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.720 [2024-07-25 17:09:29.842557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.720 qpair failed and we were unable to recover it. 00:30:09.720 [2024-07-25 17:09:29.842945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.842973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.843474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.843503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.844012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.844039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 
00:30:09.721 [2024-07-25 17:09:29.844535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.844636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.845277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.845344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.845931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.845960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.846536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.846567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.847067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.847095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.847594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.847623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.848121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.848149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.848660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.848689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.849211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.849241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.849704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.849732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 
00:30:09.721 [2024-07-25 17:09:29.850280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.850333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.850872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.850902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.851524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.851626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.852450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.852552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.853162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.853199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.853741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.853771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.854405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.854518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.855120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.855156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.855693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.855725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.856431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.856533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 
00:30:09.721 [2024-07-25 17:09:29.857015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.857051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.857537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.857568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.858068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.858097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.858635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.858665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.859166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.859194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.859716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.859746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.860267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.860320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.860935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.860964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.861462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.861564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.862155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.862190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 
00:30:09.721 [2024-07-25 17:09:29.862585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.862620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.863212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.863243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.863689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.863718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.864198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.864235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.864755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.864783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.865449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.865549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.866167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.866224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.866641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.866687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.867030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.867059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 00:30:09.721 [2024-07-25 17:09:29.867579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.867610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.721 qpair failed and we were unable to recover it. 
00:30:09.721 [2024-07-25 17:09:29.868114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.721 [2024-07-25 17:09:29.868143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.868640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.868670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.869078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.869120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dd4000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it.
00:30:09.722 [2024-07-25 17:09:29.869298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10caf20 is same with the state(5) to be set
00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Write completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed 00:30:09.722 Read completed with error (sct=0, sc=8) 00:30:09.722 starting I/O failed
00:30:09.722 [2024-07-25 17:09:29.869757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:09.722 [2024-07-25 17:09:29.870081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.870096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.870625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.870637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.871097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.871106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.871583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.871631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.872141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.872151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.872746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.872794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.873427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.873476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.874002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.874012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it.
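For context on the failure pattern above: errno = 111 is ECONNREFUSED on Linux, so each reconnect attempt here is being actively refused at 10.0.0.2:4420, which typically means nothing is listening on that address and port at that moment; once the qpair gives up, the 32 outstanding reads and writes on qpair id 2 complete with an error status and the initiator retries against a new tqpair (0x7f5dcc000b90). The sketch below is illustrative only, not SPDK code: it reproduces the same connect()/ECONNREFUSED syscall behaviour that posix_sock_create reports, using the address and the standard NVMe/TCP port taken from the log.

/* Illustrative reproduction of the connect() failure seen above.
 * Not SPDK code; address and port are copied from the log lines. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *target_ip = "10.0.0.2";     /* target address from the log */
    const unsigned short target_port = 4420; /* standard NVMe/TCP port, as in the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(target_port);
    if (inet_pton(AF_INET, target_ip, &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port, errno is ECONNREFUSED, which is 111 on
         * Linux -- the same value printed by posix_sock_create in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}

Built with a stock compiler (e.g. cc test_connect.c, a hypothetical file name) and run while the target is not listening, it should print "connect() failed, errno = 111 (Connection refused)", matching the messages in this section.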
00:30:09.722 [2024-07-25 17:09:29.874477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.874487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.874954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.874962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.875439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.875448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.875922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.875929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.876389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.876397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.876860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.876869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.877370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.877379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.877656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.877677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.878156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.878163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.878414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.878433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 
00:30:09.722 [2024-07-25 17:09:29.878935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.878943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.879515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.879528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.879904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.879912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.880405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.880412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.880892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.722 [2024-07-25 17:09:29.880900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.722 qpair failed and we were unable to recover it. 00:30:09.722 [2024-07-25 17:09:29.881168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.881175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.881619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.881626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.882088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.882096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.882597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.882607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.883078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.883086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 
00:30:09.723 [2024-07-25 17:09:29.883596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.883606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.884083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.884091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.884574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.884583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.885054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.885063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.885529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.885537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.885994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.886001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.886475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.886485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.886945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.886953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.887415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.887427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.887820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.887829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 
00:30:09.723 [2024-07-25 17:09:29.888390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.888399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.888870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.888878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.889344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.889353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.889840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.889849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.890312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.890319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.890800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.890808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.891272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.891280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.891854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.891863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.892329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.892337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.892797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.892804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 
00:30:09.723 [2024-07-25 17:09:29.893270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.893279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.893657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.893666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.894126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.894134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.894370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.894386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.894808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.894816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.895281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.895289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.895748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.895755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.896219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.896229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.896723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.896732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.897195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.897208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 
00:30:09.723 [2024-07-25 17:09:29.897664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.897673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.898157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.898175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.898644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.898653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.898958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.898965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.899518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.899565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.900043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.900053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.900608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.900651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.901172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.723 [2024-07-25 17:09:29.901183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.723 qpair failed and we were unable to recover it. 00:30:09.723 [2024-07-25 17:09:29.901750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.901795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.902422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.902468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 
00:30:09.724 [2024-07-25 17:09:29.902981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.902991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.903583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.903627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.903974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.903982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.904584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.904628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.905135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.905144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.905649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.905657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.906119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.906127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.906707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.906752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.907422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.907466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.907976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.907986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 
00:30:09.724 [2024-07-25 17:09:29.908564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.908608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.909120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.909128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.909598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.909605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.910060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.910067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.910619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.910662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.911171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.911180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.911735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.911779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.912413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.912457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.912967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.912977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.913466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.913511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 
00:30:09.724 [2024-07-25 17:09:29.914018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.914027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.914586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.914631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.915108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.915117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.915662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.915670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.916126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.916135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.916700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.916744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.917399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.917444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.917964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.917973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.918526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.918571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.919068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.919077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 
00:30:09.724 [2024-07-25 17:09:29.919642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.919687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.920162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.920176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.920741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.920785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.921157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.921167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.921742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.921786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.922400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.922444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.922960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.922968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.923528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.923572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.924068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.924078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.924707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.924751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 
00:30:09.724 [2024-07-25 17:09:29.925420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.724 [2024-07-25 17:09:29.925464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.724 qpair failed and we were unable to recover it. 00:30:09.724 [2024-07-25 17:09:29.925965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.925974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.926526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.926570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.926904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.926914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.927500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.927544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.927944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.927953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.928528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.928572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.928938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.928948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.929264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.929273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.929632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.929639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 
00:30:09.725 [2024-07-25 17:09:29.930133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.930140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.930517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.930525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.931009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.931016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.931474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.931482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.931936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.931943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.932217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.932237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.932734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.932742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.933398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.933442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.933734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.933760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.934269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.934277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 
00:30:09.725 [2024-07-25 17:09:29.934757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.934766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.935248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.935256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.935731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.935738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.936104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.936112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.936610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.936618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.937091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.937099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.937587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.937594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.938044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.938051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.938601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.938641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.939155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.939164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 
00:30:09.725 [2024-07-25 17:09:29.939625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.939633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.940098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.940110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.940668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.940709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.941382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.941425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.941804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.941816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.942292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.942300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.942683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.942690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.943109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.943117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.943606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.943613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.944065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.944072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 
00:30:09.725 [2024-07-25 17:09:29.944624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.944664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.945178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.945187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.945563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.945571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.946165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.946172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.725 [2024-07-25 17:09:29.946611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.725 [2024-07-25 17:09:29.946652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.725 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.947163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.947172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.947723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.947765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.948418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.948461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.948938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.948947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.949502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.949543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 
00:30:09.726 [2024-07-25 17:09:29.950055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.950064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.950615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.950656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.951162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.951171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.951741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.951782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.952458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.952499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.953001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.953010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.953606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.953648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.954135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.954144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.954743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.954784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.955155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.955165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 
00:30:09.726 [2024-07-25 17:09:29.955734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.955775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.956448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.956489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.956929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.956938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.957546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.957586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.958055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.958064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.958515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.958555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.958978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.958988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.959580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.959621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.960128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.960137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.960690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.960698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 
00:30:09.726 [2024-07-25 17:09:29.961196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.961208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.961761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.961806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.962406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.962447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.962951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.962960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.963515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.963556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.964073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.964082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.964545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.964585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.965059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.965069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.965635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.965675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.966178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.966188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 
00:30:09.726 [2024-07-25 17:09:29.966789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.966830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.726 [2024-07-25 17:09:29.967194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.726 [2024-07-25 17:09:29.967216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.726 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.967793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.967834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.968446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.968488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.968975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.968984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.969583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.969625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.970108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.970118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.970733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.970775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.971404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.971446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.971936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.971945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 
00:30:09.727 [2024-07-25 17:09:29.972549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.972590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.973096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.973106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.973615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.973624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.974098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.974106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.974591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.974600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.974970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.974979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.975565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.975607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.976010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.976020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.727 [2024-07-25 17:09:29.976596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.727 [2024-07-25 17:09:29.976643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.727 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-25 17:09:29.977025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-25 17:09:29.977037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 
00:30:09.997 [2024-07-25 17:09:29.977642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-25 17:09:29.977684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-25 17:09:29.978187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-25 17:09:29.978197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-25 17:09:29.978650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-25 17:09:29.978693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-25 17:09:29.979196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-25 17:09:29.979215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-25 17:09:29.979648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.979689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.980186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.980196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.980760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.980802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.981391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.981434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.981721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.981730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.982178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.982186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 
00:30:09.998 [2024-07-25 17:09:29.982678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.982687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.983152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.983160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.983720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.983763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.984421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.984464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.984949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.984958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.985524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.985566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.986097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.986106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.986355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.986362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.986878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.986885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.987348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.987356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 
00:30:09.998 [2024-07-25 17:09:29.987974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.987980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.988546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.988587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.989106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.989116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.989586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.989595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.990098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.990106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.990475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.990483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.990988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.990995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.991627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.991668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.991952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.991966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.992432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.992442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 
00:30:09.998 [2024-07-25 17:09:29.992902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.992909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.993461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.993502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.993802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.993811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.994293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.994301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.994786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.994793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.994945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.994952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.995416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.995423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.995893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.995900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.996250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.996263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-25 17:09:29.996796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.996803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 
00:30:09.998 [2024-07-25 17:09:29.997262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-25 17:09:29.997269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:29.997705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:29.997713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:29.998208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:29.998216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:29.998673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:29.998680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:29.999451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:29.999492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:29.999997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.000006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.000581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.000622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.001114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.001124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.001610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.001618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.002440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.002450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 
00:30:09.999 [2024-07-25 17:09:30.002804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.002811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.003307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.003315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.003690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.003698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.004183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.004191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.004687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.004696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.005169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.005177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.005650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.005658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.006157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.006165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.006722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.006763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.007417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.007459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 
00:30:09.999 [2024-07-25 17:09:30.007870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.007880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.008457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.008498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.009064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.009074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.009642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.009683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.010197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.010216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.010704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.010713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.011215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.011225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.011686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.011694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.012188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.012196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.012653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.012661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 
00:30:09.999 [2024-07-25 17:09:30.013116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.013124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.013672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.013710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.014212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.014223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.014670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.014678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.015178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.015185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-25 17:09:30.015554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-25 17:09:30.015593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.016218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.016230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.020215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.020244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.020728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.020744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.021495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.021534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 
00:30:10.000 [2024-07-25 17:09:30.022100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.022108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.022583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.022590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.022971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.022978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.023624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.023663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.023939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.023948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.024460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.024499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.024979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.024989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.025575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.025613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.026154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.026163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.026631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.026640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 
00:30:10.000 [2024-07-25 17:09:30.027143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.027152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.027526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.027566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.027990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.027999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.028552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.028591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.029055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.029063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.029618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.029658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.030171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.030181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.030848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.030887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.031501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.031540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.032042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.032051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 
00:30:10.000 [2024-07-25 17:09:30.032617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.032656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.033173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.033182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.033742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.033781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.034132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-25 17:09:30.034141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-25 17:09:30.034741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.034781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.035415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.035453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.035967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.035976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.036530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.036570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.037059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.037068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.037616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.037655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 
00:30:10.001 [2024-07-25 17:09:30.038041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.038050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.038641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.038680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.039195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.039211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.039769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.039808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.040439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.040478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.040995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.041006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.041506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.041545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.042075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.042084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.042662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.042707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.043216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.043226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 
00:30:10.001 [2024-07-25 17:09:30.043709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.043718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.044095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.044103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.044383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.044391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.044759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.044768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.045116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.045126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.045512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.045524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.046031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.046038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.046303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.046311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.046796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.046803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 00:30:10.001 [2024-07-25 17:09:30.047257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.001 [2024-07-25 17:09:30.047264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.001 qpair failed and we were unable to recover it. 
00:30:10.001 [2024-07-25 17:09:30.047723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.001 [2024-07-25 17:09:30.047730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:10.001 qpair failed and we were unable to recover it.
00:30:10.001 [2024-07-25 17:09:30.047989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.001 [2024-07-25 17:09:30.048006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:10.001 qpair failed and we were unable to recover it.
00:30:10.001 - 00:30:10.008 [2024-07-25 17:09:30.048259 - 17:09:30.148494] posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every remaining connection attempt in this interval.
00:30:10.008 [2024-07-25 17:09:30.148966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.148975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.149440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.149477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.149932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.149941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.150416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.150452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.150942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.150951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.151402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.151439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.151953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.151961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.152498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.152536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.152991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.153000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.153606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.153642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 
00:30:10.008 [2024-07-25 17:09:30.154100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.154108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.154573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.154581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.155041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.155047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.155612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.155647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.156007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.156016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.156460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.156494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.156907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.156915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.157494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.157530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.008 qpair failed and we were unable to recover it. 00:30:10.008 [2024-07-25 17:09:30.158018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.008 [2024-07-25 17:09:30.158031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.158545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.158581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 
00:30:10.009 [2024-07-25 17:09:30.159090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.159099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.159579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.159586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.160041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.160048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.160613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.160648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.161102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.161111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.161716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.161752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.162210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.162219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.162748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.162784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.163426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.163461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.163975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.163984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 
00:30:10.009 [2024-07-25 17:09:30.164484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.164519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.164975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.164984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.165491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.165526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.165994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.166003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.166546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.166582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.167045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.167053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.167606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.167642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.168124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.168133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.168684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.168721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.169193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.169208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 
00:30:10.009 [2024-07-25 17:09:30.169689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.169724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.170187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.170196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.170778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.170812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.171395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.171430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.171950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.171958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.172194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.172217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.172709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.172716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.173223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.173243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.173692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.173698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.173927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.173939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 
00:30:10.009 [2024-07-25 17:09:30.174445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.174452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.174898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.174905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.175349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.175356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.175843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.175850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.009 [2024-07-25 17:09:30.176074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.009 [2024-07-25 17:09:30.176087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.009 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.176607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.176615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.177067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.177074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.177515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.177522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.177967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.177978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.178563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.178598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 
00:30:10.010 [2024-07-25 17:09:30.178967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.178976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.179457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.179491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.180015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.180024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.180438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.180473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.180928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.180937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.181183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.181198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.181661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.181670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.182038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.182044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.182597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.182632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.183096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.183105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 
00:30:10.010 [2024-07-25 17:09:30.183717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.183753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.184368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.184404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.184908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.184916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.185462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.185500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.186002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.186011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.186443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.186479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.186950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.186960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.187551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.187585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.188071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.188080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.188706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.188740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 
00:30:10.010 [2024-07-25 17:09:30.189145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.189154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.189698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.189732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.190187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.190196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.190771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.190804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.191406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.191451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.191943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.191952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.192506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.192539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.193039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.193048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.193590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.193623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.194077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.194085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 
00:30:10.010 [2024-07-25 17:09:30.194635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.194669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.010 [2024-07-25 17:09:30.195120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.010 [2024-07-25 17:09:30.195130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.010 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.195606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.195614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.196095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.196102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.196466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.196474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.196940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.196947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.197402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.197435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.197797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.197807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.198122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.198133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.198598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.198606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 
00:30:10.011 [2024-07-25 17:09:30.199062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.199069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.199607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.199641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.199996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.200005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.200572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.200605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.200960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.200969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.201533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.201566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.202016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.202024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.202561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.202595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.203046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.203055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.203598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.203632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 
00:30:10.011 [2024-07-25 17:09:30.204087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.204095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.204547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.204554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.204998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.205006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.205582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.205616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.205991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.206000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.206572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.206605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.207081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.207090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.207392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.207399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.207855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.207861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.208172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.208179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 
00:30:10.011 [2024-07-25 17:09:30.208668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.208675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.209194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.209204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.209693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.209725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.210191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.210205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 [2024-07-25 17:09:30.210784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.011 [2024-07-25 17:09:30.210818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.011 qpair failed and we were unable to recover it. 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Read completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.011 Write completed with error (sct=0, sc=8) 00:30:10.011 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 
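Editor's note on the "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" burst at the end of the chunk above: these are outstanding NVMe I/Os being completed back with an error status while the queue pair is torn down. sct is the NVMe Status Code Type and sc the Status Code within that type. Assuming the standard NVMe base-specification encoding, sct=0 is Generic Command Status and sc=0x08 is "Command Aborted due to SQ Deletion", which matches I/O being aborted during qpair teardown rather than a media error. An illustrative decoder; the names come from the NVMe specification, not from SPDK headers:

/*
 * Illustrative decode of the (sct, sc) pair printed above, assuming the
 * standard NVMe status encoding.
 */
#include <stdio.h>

static const char *nvme_status_str(unsigned int sct, unsigned int sc)
{
	if (sct == 0x0) {			/* Generic Command Status */
		switch (sc) {
		case 0x00: return "Successful Completion";
		case 0x04: return "Data Transfer Error";
		case 0x07: return "Command Abort Requested";
		case 0x08: return "Command Aborted due to SQ Deletion";
		default:   return "Generic Command Status (other)";
		}
	}
	if (sct == 0x1) {
		return "Command Specific Status";
	}
	return "Other Status Code Type";
}

int main(void)
{
	/* The completions above report sct=0, sc=8. */
	printf("sct=0, sc=8 -> %s\n", nvme_status_str(0, 8));
	return 0;
}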
00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Write completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 Read completed with error (sct=0, sc=8) 00:30:10.012 starting I/O failed 00:30:10.012 [2024-07-25 17:09:30.211137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.012 [2024-07-25 17:09:30.211526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.211547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.212082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.212094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.212561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.212573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.213012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.213023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.213486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.213498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.213944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.213955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.214527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.214572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 
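Editor's note on the chunk above: spdk_nvme_qpair_process_completions() reports "CQ transport error -6 (No such device or address) on qpair id 3". The value -6 is the negative Linux errno ENXIO, i.e. the TCP transport under that queue pair is gone. After this point the connect attempts continue against a different tqpair object (the pointer changes from 0x7f5dcc000b90 to 0x10bd220), which is consistent with the host abandoning the failed qpair and retrying on a fresh one. A quick sanity check of the two errno values seen in this section; plain C with Linux errno assumptions, not SPDK code:

/*
 * Map the two errno values that appear in this part of the log to their
 * Linux strings: connect() reports errno 111, the completion path reports
 * transport error -6.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("111 -> %s (ECONNREFUSED=%d)\n", strerror(111), ECONNREFUSED);
	printf("  6 -> %s (ENXIO=%d)\n", strerror(6), ENXIO);
	return 0;
}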
00:30:10.012 [2024-07-25 17:09:30.215065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.215079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.215506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.215519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.215989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.216000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.216566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.216612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.217085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.217097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.217561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.217576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.217937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.217947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.218452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.218464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.218974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.218984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.219525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.219537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 
00:30:10.012 [2024-07-25 17:09:30.220055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.220065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.220515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.220529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.221041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.221051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.221549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.221567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.222032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.222042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.222410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.222422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.222908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.222919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.223360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.223370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.223886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.223897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.224174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.224184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 
00:30:10.012 [2024-07-25 17:09:30.224660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.224670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.225025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.225037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.012 [2024-07-25 17:09:30.225524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.012 [2024-07-25 17:09:30.225535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.012 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.225980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.225990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.226520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.226564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.226920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.226933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.227413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.227456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.227958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.227972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.228492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.228535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.229032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.229047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 
00:30:10.013 [2024-07-25 17:09:30.229485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.229529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.229902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.229916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.230125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.230138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.230584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.230595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.231079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.231089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.231622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.231634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.232085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.232096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.232433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.232445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.232916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.232926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.233396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.233407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 
00:30:10.013 [2024-07-25 17:09:30.233865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.233880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.234244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.234255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.234725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.234735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.235196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.235222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.235703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.235713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.236180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.236189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.236819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.236863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.237472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.237515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.238013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.238025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.238562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.238605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 
00:30:10.013 [2024-07-25 17:09:30.239079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.239091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.013 qpair failed and we were unable to recover it. 00:30:10.013 [2024-07-25 17:09:30.239542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.013 [2024-07-25 17:09:30.239553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.240019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.240031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.240475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.240486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.240944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.240955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.241547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.241591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.242070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.242085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.242556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.242567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.242941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.242951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.243547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.243590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 
00:30:10.014 [2024-07-25 17:09:30.244163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.244178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.244615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.244626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.245090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.245100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.245658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.245670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.246124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.246134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.246645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.246689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.247171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.247184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.247654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.247669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.248043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.248054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.248601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.248613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 
00:30:10.014 [2024-07-25 17:09:30.249081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.249091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.249569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.249581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.249948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.249958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.250420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.250431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.250876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.250886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.251151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.251172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.251459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.251469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.251938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.251950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.252464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.252477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.252969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.252980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 
00:30:10.014 [2024-07-25 17:09:30.253240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.253258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.253728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.253739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.254124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.254134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.254596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.254606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.255054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.255065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.255434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.255478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.255954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.255968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.256412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.256455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.256944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.014 [2024-07-25 17:09:30.256956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.014 qpair failed and we were unable to recover it. 00:30:10.014 [2024-07-25 17:09:30.257413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.257456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 
00:30:10.015 [2024-07-25 17:09:30.257953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.257965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.258501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.258544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.258951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.258964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.259442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.259486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.259969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.259982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.260490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.260533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.260944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.260956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.261302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.261314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.015 [2024-07-25 17:09:30.261683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.015 [2024-07-25 17:09:30.261695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.015 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.262161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.262173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 
00:30:10.284 [2024-07-25 17:09:30.262661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.262673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.263111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.263121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.263598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.263609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.264064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.264074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.264631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.264683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.265187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.265199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.265653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.265664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.266054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.266065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.266538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.266555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.267040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.267052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 
00:30:10.284 [2024-07-25 17:09:30.267646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.267689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.268177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.268191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.268758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.268800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.269422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.269464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.269959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.269973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.270568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-07-25 17:09:30.270610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-07-25 17:09:30.271101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.271114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.271662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.271704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.272178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.272190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.272699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.272711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 
00:30:10.285 [2024-07-25 17:09:30.273151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.273161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.273639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.273652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.274018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.274029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.274493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.274504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.274876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.274885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.275354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.275365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.275809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.275820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.276311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.276321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.276778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.276789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.277232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.277242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 
00:30:10.285 [2024-07-25 17:09:30.277715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.277726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.278186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.278196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.278680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.278696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.278960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.278970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.279419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.279429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.279782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.279796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.280256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.280266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.280730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.280750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.281212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.281222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.281686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.281695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 
00:30:10.285 [2024-07-25 17:09:30.282183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.282193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.282422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.282432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.282772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.282781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.283248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.283259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.283737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.283746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.284074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.284086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.284371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.284381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.284851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.284861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.285242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.285253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.285613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.285623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 
00:30:10.285 [2024-07-25 17:09:30.286053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.286062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.286417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.286428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.286768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.286778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-07-25 17:09:30.287228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-07-25 17:09:30.287238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-25 17:09:30.287657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-25 17:09:30.287667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-25 17:09:30.288116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-25 17:09:30.288126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-25 17:09:30.288577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-25 17:09:30.288587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-25 17:09:30.288822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-25 17:09:30.288831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-25 17:09:30.289166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-25 17:09:30.289177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-25 17:09:30.289619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-25 17:09:30.289630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 
00:30:10.286 [2024-07-25 17:09:30.290029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.286 [2024-07-25 17:09:30.290038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420
00:30:10.286 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111) against addr=10.0.0.2, port=4420 on tqpair=0x10bd220 repeats continuously, with only the timestamps advancing, until 17:09:30.389 ...]
00:30:10.291 [2024-07-25 17:09:30.389563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.291 [2024-07-25 17:09:30.389575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420
00:30:10.291 qpair failed and we were unable to recover it.
00:30:10.291 [2024-07-25 17:09:30.390055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-07-25 17:09:30.390068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-07-25 17:09:30.390405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-07-25 17:09:30.390416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-07-25 17:09:30.390752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-07-25 17:09:30.390764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-07-25 17:09:30.391245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-07-25 17:09:30.391256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-07-25 17:09:30.391722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-07-25 17:09:30.391735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.392214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.392225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.392699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.392711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.393113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.393124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.393582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.393593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.394077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.394089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-07-25 17:09:30.394270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.394282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.394748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.394762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.395225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.395238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.395453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.395463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.395936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.395948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.396087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.396098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.396430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.396441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.396675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.396685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.397069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.397082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.397525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.397537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-07-25 17:09:30.397985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.397997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.398457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.398469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.398926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.398937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.399391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.399412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.399884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.399895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.400351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.400362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.400824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.400836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.401316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.401328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.401822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.401833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.402333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.402345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-07-25 17:09:30.402796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.402807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.403263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.403274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.403737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.403749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.404231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.404242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.404691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.404703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.405163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.405174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.405658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.405672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.406165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.406177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.406635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.406646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.407076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.407088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-07-25 17:09:30.407547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.407559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.408022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.408033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-07-25 17:09:30.408501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-07-25 17:09:30.408514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.408958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.408970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.409427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.409439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.409915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.409928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.410485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.410525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.411072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.411087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.411545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.411557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.412018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.412030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 
00:30:10.293 [2024-07-25 17:09:30.412625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.412668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.413144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.413159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.413727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.413766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.414418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.414457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.414901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.414914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.415468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.415507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.415973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.415986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.416574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.416612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.417076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.417090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.417471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.417485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 
00:30:10.293 [2024-07-25 17:09:30.417932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.417943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.418617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.418656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.419134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.419148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.419602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.419615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.419864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.419882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.420347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.420359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.420855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.420867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.421324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.421336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.421824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.421836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.422184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.422196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 
00:30:10.293 [2024-07-25 17:09:30.422650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.422661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.423121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.423133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.423585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.423597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.424057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.424068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-07-25 17:09:30.424519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-07-25 17:09:30.424533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.425015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.425026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.425607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.425646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.426120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.426140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.426699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.426738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.427212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.427228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 
00:30:10.294 [2024-07-25 17:09:30.427687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.427699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.428175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.428188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.428736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.428775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.429430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.429470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.429962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.429975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.430548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.430587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.431052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.431065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.431523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.431538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.432015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.432027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.432623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.432662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 
00:30:10.294 [2024-07-25 17:09:30.433154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.433168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.433635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.433649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.434144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.434156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.434739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.434779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.435276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.435292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.435747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.435760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.436240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.436252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.436729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.436741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.437064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.437077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.437554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.437566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 
00:30:10.294 [2024-07-25 17:09:30.438022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.438034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.438492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.438504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.438985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.438998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.439545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.439583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.440052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.440071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.440625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.440664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.441125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.441140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.441683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.441722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.442211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.442227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.442476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.442487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 
00:30:10.294 [2024-07-25 17:09:30.442850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.442861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.443469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.443508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.294 [2024-07-25 17:09:30.443984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.294 [2024-07-25 17:09:30.443997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.294 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.444553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.444591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.445090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.445104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.445598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.445613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.445983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.445992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.446482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.446502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.446716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.446726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.447175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.447185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 
00:30:10.295 [2024-07-25 17:09:30.447623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.447634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.448073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.448083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.448405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.448416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.448777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.448787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.449225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.449236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.449694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.449705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.450170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.450181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.450529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.450540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.450890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.450900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.451112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.451122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 
00:30:10.295 [2024-07-25 17:09:30.451436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.451447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.451906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.451916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.452395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.452405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.452870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.452884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.453304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.453315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.453750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.453760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.454189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.454205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.454651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.454661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.455141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.455152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.455602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.455612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 
00:30:10.295 [2024-07-25 17:09:30.456056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.456066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.456618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.456656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.457018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.457031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.457573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.457611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.458093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.458105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.458563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.458579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.459069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.459079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.459580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.459592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.460025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.460034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 00:30:10.295 [2024-07-25 17:09:30.460554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.460592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.295 qpair failed and we were unable to recover it. 
00:30:10.295 [2024-07-25 17:09:30.461095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.295 [2024-07-25 17:09:30.461107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.461567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.461580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.462108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.462118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.462747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.462784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.463277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.463291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.463720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.463732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.464236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.464246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.464581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.464592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.465048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.465059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.465493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.465503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 
00:30:10.296 [2024-07-25 17:09:30.465977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.465988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.466534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.466545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.467001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.467013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.467637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.467675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.468143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.468158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.468711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.468750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.469181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.469196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.469669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.469681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.470029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.470040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.470511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.470549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 
00:30:10.296 [2024-07-25 17:09:30.471022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.471035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.471523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.471561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.472039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.472056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.472560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.472599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.473078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.473091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.473647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.473685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.474096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.474109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.474649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.474663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.475125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.475136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.475504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.475517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 
00:30:10.296 [2024-07-25 17:09:30.475981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.475993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.476520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.476534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.477001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.477014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.477267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.477289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.477749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.477762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.478218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.478229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.478685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.478696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.479036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.479048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.296 qpair failed and we were unable to recover it. 00:30:10.296 [2024-07-25 17:09:30.479413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.296 [2024-07-25 17:09:30.479425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.479878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.479891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 
00:30:10.297 [2024-07-25 17:09:30.480345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.480357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.480806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.480818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.481275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.481288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.481884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.481896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.482266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.482278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.482703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.482714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.483172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.483185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.483653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.483664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.484153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.484164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.484607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.484621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 
00:30:10.297 [2024-07-25 17:09:30.485076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.485087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.485566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.485577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.486033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.486045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.486616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.486655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.487099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.487114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.487565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.487580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.488032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.488044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.488643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.488683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.489039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.489053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.489555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.489570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 
00:30:10.297 [2024-07-25 17:09:30.490061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.490073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.490434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.490447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.490826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.490838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.491322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.491334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.491831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.491843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.492301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.492312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.492646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.492657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.493042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.493053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.493515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.493527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.494004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.494014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 
00:30:10.297 [2024-07-25 17:09:30.494448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.494460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.494791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.494802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.495279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.495290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.495666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.495677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.496137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.496149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.297 [2024-07-25 17:09:30.496605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.297 [2024-07-25 17:09:30.496617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.297 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.497077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.497090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.497571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.497582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.498067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.498080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.498551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.498562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 
00:30:10.298 [2024-07-25 17:09:30.499000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.499011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.499578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.499617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.500038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.500052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.500620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.500659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.501145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.501159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.501691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.501729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.502195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.502216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.502670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.502683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.503135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.503148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.503779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.503818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 
00:30:10.298 [2024-07-25 17:09:30.504298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.504313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.504668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.504681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.505013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.505026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.505538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.505584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.506054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.506068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.506607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.506619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.507102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.507114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.507599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.507611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.508084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.508095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.508560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.508571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 
00:30:10.298 [2024-07-25 17:09:30.509050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.509062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.509616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.509655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.509995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.510010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.510482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.510521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.510990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.511009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.511468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.511507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.511971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.511984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.512530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.512569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-07-25 17:09:30.513046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-07-25 17:09:30.513060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.513627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.513666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-07-25 17:09:30.514130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.514143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.514554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.514593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.515058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.515070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.515710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.515748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.516235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.516250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.516708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.516721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.517234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.517246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.517694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.517710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.518163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.518174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.518637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.518648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-07-25 17:09:30.519157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.519168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.519628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.519640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.520095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.520106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.520555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.520566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.521025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.521037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.521495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.521508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.521987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.521999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.522437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.522450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.522905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.522917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.523408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.523447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-07-25 17:09:30.523913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.523932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.524403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.524442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.524927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.524940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.525489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.525527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.525993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.526006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.526556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.526596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.527060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.527074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.527619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.527657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.528131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.528144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.528704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.528743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-07-25 17:09:30.529216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.529230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.529687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.529701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.530156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.530168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.530807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.530846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.531327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.531347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.531787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.531800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-07-25 17:09:30.532405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-07-25 17:09:30.532444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.532821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.532837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.533295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.533307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.533762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.533774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 
00:30:10.300 [2024-07-25 17:09:30.534291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.534302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.534726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.534740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.535206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.535218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.535700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.535711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.536238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.536251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.536699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.536710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.537153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.537165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.537613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.537625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.537949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.537962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.538556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.538595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 
00:30:10.300 [2024-07-25 17:09:30.539059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.539074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.539627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.539666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.540137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.540152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.540707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.540746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.541375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.541414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.541879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.541892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.542469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.542508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.542991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.543005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.543486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.543501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.543959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.543971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 
00:30:10.300 [2024-07-25 17:09:30.544418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.544429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.544918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.544933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.545388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.545400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.545809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.545821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.546297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.546308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.546784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.546796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.547250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.547261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.547706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.547719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.548170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.548181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-07-25 17:09:30.548646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-07-25 17:09:30.548658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 
00:30:10.569 [2024-07-25 17:09:30.548986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.549000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.549455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.549467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.549917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.549928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.550499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.550537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.551000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.551015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.551562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.551601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.552075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.552090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.552448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.552460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.552789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.552800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.553267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.553279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 
00:30:10.569 [2024-07-25 17:09:30.553586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.553598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.554058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.554069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.554453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.554465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.554930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.554941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.555402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.555414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.555859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.555870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.556497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.556535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.557015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.557028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.557605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.557643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-25 17:09:30.558110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-25 17:09:30.558123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-25 17:09:30.558463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.558480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.558932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.558944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.559511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.559556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.560048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.560061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.560540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.560553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.561009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.561024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.561481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.561494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.561972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.561984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.562536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.562574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.562920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.562936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-25 17:09:30.563485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.563523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.563995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.564008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.564586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.564629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.565107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.565124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.565585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.565598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.566048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.566059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.566629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.566668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.567007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.567021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.567578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.567616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.568092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.568105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-25 17:09:30.568572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.568585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.569051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.569063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.569645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.569683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.570172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.570186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.570686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.570700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.571186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.571197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.571653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.571667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.572029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.572041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.572517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.572528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.573001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.573013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-25 17:09:30.573555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.573568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.574015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.574027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.574473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.574485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.575010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.575022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.575471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.575483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.575940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.575952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-25 17:09:30.576580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-25 17:09:30.576620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.577106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.577119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.577735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.577774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.578417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.578460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-25 17:09:30.578939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.578952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.579209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.579233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.579710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.579722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.580205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.580217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.580761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.580799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.581260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.581276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.581699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.581712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.581946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.581963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.582429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.582441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.582918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.582930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-25 17:09:30.583379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.583392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.583850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.583862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.584289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.584301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.584801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.584814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.585279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.585290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.585766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.585777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.586257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.586268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.586726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.586737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.587213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.587224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.587678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.587689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-25 17:09:30.588151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.588163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.588613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.588624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.589079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.589091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.589565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.589577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.590052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.590063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.590621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.590659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.591024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.591049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.591698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.591736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.592204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.592220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.592688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.592699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-25 17:09:30.593030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.593041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.593592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.593630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.594109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-25 17:09:30.594122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-25 17:09:30.594663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.594701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.595164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.595177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.595630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.595669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.596025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.596038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.596474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.596518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.596865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.596878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.597334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.597346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-25 17:09:30.597805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.597818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.598274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.598286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.598654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.598671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.599040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.599052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.599507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.599518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.599994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.600006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.600462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.600476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.600996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.601008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.601463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.601476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.601931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.601943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-25 17:09:30.602480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.602519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.602993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.603008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.603552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.603591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.604059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.604078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.604556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.604569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.605024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.605035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.605588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.605627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.606104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.606118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.606574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.606588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.607044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.607055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-25 17:09:30.607633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.607673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.608139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.608152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.608601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.608640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.609116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.609129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.609592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.609606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.610081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.610092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.610603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.610616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.611063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.611075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.611561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.611575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-25 17:09:30.611899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-25 17:09:30.611913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-25 17:09:30.612378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.612390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.612848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.612860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.613405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.613443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.613819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.613834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.614191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.614210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.614661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.614672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.615125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.615138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.615672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.615683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.616124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.616136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.616581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.616619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 
00:30:10.573 [2024-07-25 17:09:30.617131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.617146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.620779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.620817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.621385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.621423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.621902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.621916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.622377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.622390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.622844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.622857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.623428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.623467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.623945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.623961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.624418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.624431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-25 17:09:30.624885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-25 17:09:30.624897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 
00:30:10.578 [2024-07-25 17:09:30.719942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.578 [2024-07-25 17:09:30.719954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.578 qpair failed and we were unable to recover it. 00:30:10.578 [2024-07-25 17:09:30.720512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.578 [2024-07-25 17:09:30.720550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.578 qpair failed and we were unable to recover it. 00:30:10.578 [2024-07-25 17:09:30.721030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.578 [2024-07-25 17:09:30.721043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.578 qpair failed and we were unable to recover it. 00:30:10.578 [2024-07-25 17:09:30.721529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.578 [2024-07-25 17:09:30.721567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bd220 with addr=10.0.0.2, port=4420 00:30:10.578 qpair failed and we were unable to recover it. 00:30:10.578 [2024-07-25 17:09:30.721964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.578 [2024-07-25 17:09:30.721986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.578 qpair failed and we were unable to recover it. 00:30:10.578 [2024-07-25 17:09:30.722382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.578 [2024-07-25 17:09:30.722390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.578 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.722744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.722752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.723231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.723239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.723666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.723673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.724006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.724013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 
00:30:10.579 [2024-07-25 17:09:30.724379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.724387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.724871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.724878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.725360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.725369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.725848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.725855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.726335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.726342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.726653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.726663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.727115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.727122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.727556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.727566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.727999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.728006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.728495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.728502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 
00:30:10.579 [2024-07-25 17:09:30.728740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.728747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.729222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.729229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.729569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.729576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.729916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.729923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.730445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.730452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.730897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.730903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.731344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.731351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.731896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.731903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.732256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.732263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.732763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.732770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 
00:30:10.579 [2024-07-25 17:09:30.733234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.733241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.733704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.733711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.734149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.734156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.734504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.734513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.734937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.734943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.735403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.735410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.735883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.735890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.736347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.736355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.736706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.736713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.737053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.737060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 
00:30:10.579 [2024-07-25 17:09:30.737536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.737543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.737934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.737940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.579 [2024-07-25 17:09:30.738405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.579 [2024-07-25 17:09:30.738412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.579 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.738850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.738857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.739321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.739328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.739770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.739776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.739992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.740003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.740472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.740481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.740930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.740937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.741374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.741382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 
00:30:10.580 [2024-07-25 17:09:30.741865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.741872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.742294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.742301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.742777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.742784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.743245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.743251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.743473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.743482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.743987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.743994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.744436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.744443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.744905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.744911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.745360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.745367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.745847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.745854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 
00:30:10.580 [2024-07-25 17:09:30.746171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.746179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.746654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.746660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.747125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.747132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.747489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.747496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.747950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.747957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.748282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.748289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.748718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.748724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.749182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.749188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.749627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.749633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.750145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.750152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 
00:30:10.580 [2024-07-25 17:09:30.750702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.750709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.751117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.751125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.751584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.751591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.752096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.752103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.752417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.752423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.752830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.752837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.753191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.753198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.753677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.753685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.754088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.754095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 00:30:10.580 [2024-07-25 17:09:30.754574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.580 [2024-07-25 17:09:30.754581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.580 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-07-25 17:09:30.754879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.754886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.755314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.755321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.755786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.755793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.756273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.756280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.756744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.756753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.757190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.757197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.757648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.757655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.758171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.758178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.758633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.758641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.759102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.759109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-07-25 17:09:30.759530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.759537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.759975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.759982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.760506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.760534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.760757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.760768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.761277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.761285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.761641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.761648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.762098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.762105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.762479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.762486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.762980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.762987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.763427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.763434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-07-25 17:09:30.763869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.763876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.764351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.764359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.764804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.764810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.765251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.765258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.765717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.765724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.766157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.766164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.766613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.766621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.767075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.767082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.767532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.767559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.767791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.767799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-07-25 17:09:30.768278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.768285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.768759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.768766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.769210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.769217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.769658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.769664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.770103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.770110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.770569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.770577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.581 [2024-07-25 17:09:30.771036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.581 [2024-07-25 17:09:30.771043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.581 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.771498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.771505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.771940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.771946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.772503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.772531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-07-25 17:09:30.772983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.772991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.773520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.773548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.773997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.774006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.774555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.774582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.775032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.775044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.775391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.775400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.775736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.775743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.776218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.776233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.776735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.776741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.777248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.777255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-07-25 17:09:30.777579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.777586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.778037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.778043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.778493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.778499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.778973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.778980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.779564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.779590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.780042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.780050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.780503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.780530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.780993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.781001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.781456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.781483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 00:30:10.582 [2024-07-25 17:09:30.781925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.582 [2024-07-25 17:09:30.781933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-07-25 17:09:30.782519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.582 [2024-07-25 17:09:30.782547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:10.582 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 from posix.c:1023:posix_sock_create, followed by the nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock error for tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously with timestamps from 17:09:30.783 through 17:09:30.884 (console time 00:30:10.582 to 00:30:10.857); only the final occurrence is shown below ...]
00:30:10.857 [2024-07-25 17:09:30.884081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.857 [2024-07-25 17:09:30.884089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:10.857 qpair failed and we were unable to recover it.
00:30:10.857 [2024-07-25 17:09:30.884500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.884508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.884871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.884878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.885426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.885454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.885909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.885918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.886461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.886488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.886946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.886954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.887534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.887562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.888011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.888020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.888558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.888586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.889036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.889044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 
00:30:10.857 [2024-07-25 17:09:30.889567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.889594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.890084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.890093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.890568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.890576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.891015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.891022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.891591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.891622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.892067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.892076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.892495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.892522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.892989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.892998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.893532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.893559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.894036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.894045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 
00:30:10.857 [2024-07-25 17:09:30.894564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.894592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.895042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.895050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.895582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.895609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.896059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.896068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.896602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.896630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.897089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.897097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.897686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-25 17:09:30.897714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-25 17:09:30.898165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.898173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.898707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.898735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.899185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.899194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-25 17:09:30.899722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.899749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.900415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.900442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.900927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.900935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.901421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.901449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.901924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.901932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.902471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.902498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.902977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.902987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.903451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.903479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.903921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.903931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.904505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.904532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-25 17:09:30.904985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.904994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.905552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.905579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.906061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.906070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.906507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.906534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.906984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.906992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.907523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.907551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.907999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.908007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.908547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.908575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.908937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.908946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.909606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.909634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-25 17:09:30.910086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.910095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.910488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.910496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.911037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.911044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.911587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.911615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.912100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.912112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.912347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.912360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.912801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.912808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.913022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.913031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.913462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.913469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.913725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.913733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-25 17:09:30.914224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.914233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.914704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.914712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.915175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.915181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.915630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.915637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.916132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-25 17:09:30.916139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-25 17:09:30.916622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.916630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.917074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.917081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.917557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.917585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.918090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.918098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.918579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.918588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-07-25 17:09:30.919064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.919071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.919688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.919716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.920063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.920073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.920635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.920663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.921021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.921029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.921564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.921592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.921950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.921959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.922502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.922530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.922987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.922995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.923596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.923623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-07-25 17:09:30.924082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.924090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.924460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.924468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.924917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.924924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.925411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.925438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.925893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.925901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.926474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.926501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.926952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.926961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.927509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.927536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.927994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.928002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.928559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.928587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-07-25 17:09:30.929037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.929046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.929598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.929625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.930160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.930169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.930699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.930726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.931175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.931186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.931727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-25 17:09:30.931754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-25 17:09:30.932216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.932225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.932659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.932666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.933099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.933106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.933708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.933736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-25 17:09:30.934190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.934198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.934738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.934765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.935400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.935428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.935883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.935891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.936109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.936121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.936561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.936570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.937008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.937014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.937563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.937591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.938042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.938051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.938601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.938628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-25 17:09:30.939061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.939069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.939640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.939668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.940146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.940155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.940708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.940736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.941091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.941100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.941669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.941696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.942185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.942194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.942725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.942752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.943208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.943217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.943748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.943775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-25 17:09:30.944380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.944408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.944872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.944880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.945427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.945454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.945908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.945917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.946405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.946433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.946902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.946910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.947490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.947518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.947968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.947977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.948512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.948540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.949001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.949009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-25 17:09:30.949575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.949602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.950099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.950107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.950632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.950640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-25 17:09:30.951077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-25 17:09:30.951084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.951627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.951657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.951997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.952006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.952573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.952600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.953061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.953069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.953627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.953655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.954110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.954119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 
00:30:10.861 [2024-07-25 17:09:30.954622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.954650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.955157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.955165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.955709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.955736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.956186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.956195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.956740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.956766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.957408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.957436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.957888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.957896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-25 17:09:30.958450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.958478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1618294 Killed "${NVMF_APP[@]}" "$@" 00:30:10.861 [2024-07-25 17:09:30.958930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-25 17:09:30.958939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 
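Every entry above is the same failure: posix_sock_create() reports errno = 111, which on Linux is ECONNREFUSED. The shell message just logged (target_disconnect.sh line 36 reporting the NVMF_APP job as Killed) is the reason: the target process that owned 10.0.0.2:4420 is gone, so nothing is listening on the port and each reconnect attempt from the host is refused. A minimal, hypothetical probe — not part of the test suite, assuming a bash build with /dev/tcp support — that reproduces the same errno from the shell:

    # Hypothetical check: attempting a TCP connect to a port with no listener
    # fails with ECONNREFUSED (errno 111), the same error posix_sock_create
    # keeps logging above for 10.0.0.2:4420.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection - no NVMe-oF/TCP listener is up"
    fi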
[... connection retries to 10.0.0.2:4420 continue (17:09:30.959403 through 17:09:30.962812), interleaved with the test's xtrace output, which is regrouped below ...]
00:30:10.861 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:10.861 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:10.861 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:10.861 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:10.861 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
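The regrouped xtrace above shows test case tc2 calling disconnect_init 10.0.0.2, which in turn runs nvmfappstart -m 0xF0 to bring a fresh target up while the host keeps retrying. A rough, hedged outline of that call chain; only the nvmfappstart call and its -m 0xF0 mask are confirmed by the trace, the rest of the body is illustrative:

    disconnect_init() {
        # restart the target application on core mask 0xF0 (cores 4-7)
        nvmfappstart -m 0xF0
        # ... presumably followed by RPC calls that re-create the TCP transport,
        # the subsystem, and the 10.0.0.2:4420 listener the host is retrying against ...
    }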
[... connect() failed (errno = 111) / sock connection error retries for tqpair=0x7f5dcc000b90 (addr=10.0.0.2, port=4420) continue, 17:09:30.963288 through 17:09:30.967568, each ending with "qpair failed and we were unable to recover it." ...]
00:30:10.861 [2024-07-25 17:09:30.967932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.861 [2024-07-25 17:09:30.967943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:10.861 qpair failed and we were unable to recover it.
[... the retry errors continue (17:09:30.968609 through 17:09:30.970105) interleaved with the xtrace of nvmfappstart/waitforlisten, regrouped below ...]
00:30:10.861 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1619206
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1619206
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1619206 ']'
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:10.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
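The trace above shows the new target being launched inside the cvl_0_0_ns_spdk namespace and waitforlisten 1619206 waiting for it to answer on /var/tmp/spdk.sock (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A hedged sketch of what that wait amounts to; the loop is illustrative, not the helper's actual body:

    pid=1619206
    for _ in $(seq 1 100); do
        # give up early if the freshly started nvmf_tgt has already exited
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; break; }
        # an answered RPC over the UNIX socket means the app is up and listening
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt (pid $pid) is listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.1
    done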
00:30:10.862 17:09:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / sock connection error retries for tqpair=0x7f5dcc000b90 (addr=10.0.0.2, port=4420) continue, 17:09:30.970732 through 17:09:30.974486, each ending with "qpair failed and we were unable to recover it." ...]
[... the connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f5dcc000b90 (addr=10.0.0.2, port=4420) keeps repeating, each attempt ending with "qpair failed and we were unable to recover it.", from 17:09:30.974845 through 17:09:31.023845 ...]
[... retries continue (17:09:31.024262 through 17:09:31.024856) while the restarted target begins initializing ...]
00:30:10.865 [2024-07-25 17:09:31.025265] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization...
00:30:10.865 [2024-07-25 17:09:31.025310] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... connect() failed (errno = 111) / sock connection error retries continue, 17:09:31.025353 through 17:09:31.027408 ...]
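The EAL line above shows how the nvmf_tgt options from the trace arrive at DPDK: the -m 0xF0 core mask appears as -c 0xF0 and the -i 0 instance id as --file-prefix=spdk0. As a quick, self-contained check of which cores that mask selects (nothing here is SPDK-specific):

    # expand the 0xF0 core mask; prints "cores in 0xF0: 4 5 6 7"
    printf 'cores in 0xF0: '
    for i in $(seq 0 7); do (( (0xF0 >> i) & 1 )) && printf '%d ' "$i"; done; echo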
[... the connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f5dcc000b90 (addr=10.0.0.2, port=4420) keeps repeating, each attempt ending with "qpair failed and we were unable to recover it.", from 17:09:31.027867 through 17:09:31.041327 ...]
00:30:10.866 [2024-07-25 17:09:31.041819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.041828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.042074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.042082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.042613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.042642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.043120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.043130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.043589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.043598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.044058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.044066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.044620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.044649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.045127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.045137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.045709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.045738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.046208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.046221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 
00:30:10.866 [2024-07-25 17:09:31.046756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.046785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.047425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.047455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.047913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.047923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.048479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.048508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-25 17:09:31.048992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-25 17:09:31.049001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.049567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.049596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.050096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.050105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.050419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.050430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.050888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.050897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.051441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.051471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
00:30:10.867 [2024-07-25 17:09:31.051840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.051849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.052313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.052321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.052783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.052791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.053255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.053263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.053754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.053762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.054212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.054220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.054674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.054682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.055131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.055140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.055592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.055601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.055968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.055976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
00:30:10.867 [2024-07-25 17:09:31.056444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.056452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.056775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.056784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.057274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.057282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.867 [2024-07-25 17:09:31.057746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.057754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.058217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.058226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.058649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.058657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.059181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.059188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.059643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.059651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.060099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.060108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.060585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.060593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
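The interleaved notice "EAL: No free 2048 kB hugepages reported on node 1" comes from DPDK's EAL during target startup: no free 2 MB hugepages were found on NUMA node 1 (startup can still succeed as long as enough hugepage memory is available elsewhere, e.g. on node 0). The same condition can be observed from plain user space by asking for hugepage-backed memory when none is reserved; the sketch below is illustrative only (not SPDK/DPDK code) and uses mmap(MAP_HUGETLB), which fails with ENOMEM when no free hugepages exist.

/* hugepage_probe.c - illustrative sketch, not SPDK/DPDK code: try to map one
 * 2048 kB hugepage. With no free hugepages reserved (see HugePages_Free in
 * /proc/meminfo), mmap() fails with ENOMEM. Build: cc hugepage_probe.c */
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 2 * 1024 * 1024;   /* one 2048 kB hugepage */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

    if (p == MAP_FAILED) {
        /* ENOMEM here typically means no free hugepage of the default size is available */
        printf("mmap(MAP_HUGETLB) failed, errno = %d (%s)\n", errno, strerror(errno));
        return 1;
    }
    printf("mapped a 2 MB hugepage at %p\n", p);
    munmap(p, len);
    return 0;
}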
00:30:10.867 [2024-07-25 17:09:31.061066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.061076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.061631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.061661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.061886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.061895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.062390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.062399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.062874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.062882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.063466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.063495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.063956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.063966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.064522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.064550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.065034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.065044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.065481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.065511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
00:30:10.867 [2024-07-25 17:09:31.065982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.065992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.066578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-25 17:09:31.066607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-25 17:09:31.067106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.067115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.067664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.067673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.068129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.068137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.068703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.068731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.069416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.069445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.069915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.069925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.070599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.070628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.070841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.070850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 
00:30:10.868 [2024-07-25 17:09:31.071351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.071359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.071866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.071875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.072341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.072349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.072847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.072855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.073338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.073346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.073814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.073822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.074289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.074297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.074642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.074650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.075136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.075144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.075600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.075608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 
00:30:10.868 [2024-07-25 17:09:31.076069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.076077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.076635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.076664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.077153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.077163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.077715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.077743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.078222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.078240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.078731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.078739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.079237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.079245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.079723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.079732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.080212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.080220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.080675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.080684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 
00:30:10.868 [2024-07-25 17:09:31.081205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.081213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.081670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.081678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.081896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.081908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.082507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-25 17:09:31.082536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-25 17:09:31.082988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.082998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.083555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.083584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.084054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.084064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.084610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.084640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.085083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.085093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.085575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.085586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 
00:30:10.869 [2024-07-25 17:09:31.085818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.085832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.086329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.086338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.086533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.086544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.086990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.086998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.087324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.087332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.087824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.087832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.088348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.088356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.088814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.088822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.089167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.089175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.089626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.089634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 
00:30:10.869 [2024-07-25 17:09:31.090000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.090008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.090223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.090233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.090679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.090688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.091148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.091156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.091695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.091724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.092394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.092424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.092771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.092781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.093243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.093252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.093758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.093766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.094256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.094265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 
00:30:10.869 [2024-07-25 17:09:31.094586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.094593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.095052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.095061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.095546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.095553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.095999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.096006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.096556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.096584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.097056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.097066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.097679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.097708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.098178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.098187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.098743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.098772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.099370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.099400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 
00:30:10.869 [2024-07-25 17:09:31.099867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-25 17:09:31.099876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-25 17:09:31.100134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.100142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.100591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.100600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.101059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.101068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.101218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.101235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.101744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.101752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.101991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.101999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.102460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.102468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.102956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.102964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.103452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.103485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 
00:30:10.870 [2024-07-25 17:09:31.103958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.103968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.104528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.104557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.105021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.105030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.105592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.105621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.106093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.106103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.106579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.106588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.106913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.106921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.107476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.107505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.107977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.107988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.108552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.108581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 
00:30:10.870 [2024-07-25 17:09:31.109080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.109089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.109414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.109422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.109514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.870 [2024-07-25 17:09:31.109770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.109782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.110255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.110263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.110713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.110720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.111092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.111100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.111521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.111529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.111990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.111997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.112453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.112462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 
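The "Total cores available: 4" notice from spdk_app_start is consistent with the EAL parameter "-c 0xF0" shown at startup: DPDK core masks are hexadecimal bitmasks, and 0xF0 has bits 4 through 7 set, so the application is limited to four cores (4, 5, 6 and 7). A trivial, purely illustrative check of that arithmetic:

/* coremask_count.c - illustrative only: count the cores selected by -c 0xF0 */
#include <stdio.h>

int main(void)
{
    unsigned int mask = 0xF0;                 /* bits 4..7 set */
    printf("cores selected by 0x%X: %d\n",
           mask, __builtin_popcount(mask));   /* prints: cores selected by 0xF0: 4 */
    return 0;
}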
00:30:10.870 [2024-07-25 17:09:31.112932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.112940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.113498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.113528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.114011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.114021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.114571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.114600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.115067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.115077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.115665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.115694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-25 17:09:31.116164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-25 17:09:31.116174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.871 qpair failed and we were unable to recover it. 00:30:10.871 [2024-07-25 17:09:31.116759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.871 [2024-07-25 17:09:31.116788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.871 qpair failed and we were unable to recover it. 00:30:10.871 [2024-07-25 17:09:31.117418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.871 [2024-07-25 17:09:31.117447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.871 qpair failed and we were unable to recover it. 00:30:10.871 [2024-07-25 17:09:31.117922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.871 [2024-07-25 17:09:31.117932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.871 qpair failed and we were unable to recover it. 
00:30:10.871 [2024-07-25 17:09:31.118527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.871 [2024-07-25 17:09:31.118556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:10.871 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.119037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.119048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.119629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.119658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.120134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.120143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.120711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.120740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.121423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.121452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.121932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.121942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.122192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.122204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.122530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.122559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.123016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.123026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 
00:30:11.141 [2024-07-25 17:09:31.123417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.123446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.123915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.123924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.124493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.124522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.124761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.124771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.125264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.125272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.125732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.125740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.126210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.126218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.126717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.126725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.127187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.127195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.127696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.127705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 
00:30:11.141 [2024-07-25 17:09:31.128161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.128169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.128628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.128637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.129093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.129101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.129582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.129594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.130041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.130051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.130601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.130630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.131099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.131109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.131620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.131649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.132114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.132124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.132472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.132480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 
00:30:11.141 [2024-07-25 17:09:31.132939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.132947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.141 [2024-07-25 17:09:31.133492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.141 [2024-07-25 17:09:31.133521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.141 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.133982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.133991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.134479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.134509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.134967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.134976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.135522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.135551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.135898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.135907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.136399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.136408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.136634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.136648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.137123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.137131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 
00:30:11.142 [2024-07-25 17:09:31.137610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.137618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.137984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.137991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.138434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.138442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.138900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.138908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.139117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.139128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.139624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.139633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.140095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.140102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.140522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.140530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.140712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.140724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.141172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.141180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 
00:30:11.142 [2024-07-25 17:09:31.141644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.141652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.141893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.141900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.142350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.142358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.142816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.142825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.143276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.143284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.143740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.143747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.144209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.144217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.144765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.144774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.145405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.145434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.145889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.145898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 
00:30:11.142 [2024-07-25 17:09:31.146474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.146502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.146986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.146995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.147565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.147594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.148051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.148067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.148644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.148673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.149160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.149169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.149719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.149749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.150221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.142 [2024-07-25 17:09:31.150239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.142 qpair failed and we were unable to recover it. 00:30:11.142 [2024-07-25 17:09:31.150708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.150716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.151193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.151205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 
00:30:11.143 [2024-07-25 17:09:31.151636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.151645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.152103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.152110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.152492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.152521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.152951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.152961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.153421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.153430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.153891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.153899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.154457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.154486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.154724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.154734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.155124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.155132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.155611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.155620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 
00:30:11.143 [2024-07-25 17:09:31.156081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.156089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.156584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.156593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.157093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.157101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.157574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.157583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.158041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.158051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.158639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.158668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.159133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.159144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.159634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.159663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.160127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.160137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.160594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.160602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 
00:30:11.143 [2024-07-25 17:09:31.161062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.161070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.161457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.161486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.161955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.161965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.162544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.162573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.163041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.163050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.163600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.163629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.164094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.164104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.164651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.164679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.165147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.165156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.165619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.165628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 
00:30:11.143 [2024-07-25 17:09:31.166090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.166098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.166665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.166694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.167165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.167174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.167627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.167659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.168144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.168155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.143 qpair failed and we were unable to recover it. 00:30:11.143 [2024-07-25 17:09:31.168526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.143 [2024-07-25 17:09:31.168535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.168986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.168994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.169548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.169576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.169985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.169995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.170560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.170589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 
00:30:11.144 [2024-07-25 17:09:31.171132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.171141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.171565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.171594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.172065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.172075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.172617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.172646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.173114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.173124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.173566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.173595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.174069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.174079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.174485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.174519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.174663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.144 [2024-07-25 17:09:31.174691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.144 [2024-07-25 17:09:31.174699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.144 [2024-07-25 17:09:31.174706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.144 [2024-07-25 17:09:31.174712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:11.144 [2024-07-25 17:09:31.174889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:11.144 [2024-07-25 17:09:31.174998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.175008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.175048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:11.144 [2024-07-25 17:09:31.175185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.144 [2024-07-25 17:09:31.175185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:11.144 [2024-07-25 17:09:31.175568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.175595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.176059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.176070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.176535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.176565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.177041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.177050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.177574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.177603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.178063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.178072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.178645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.178674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.179145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.179155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 
00:30:11.144 [2024-07-25 17:09:31.179789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.179819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.180394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.180423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.180871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.180881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.181525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.181554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.182027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.182036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.182602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.182631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.183113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.183123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.183686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.183716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.184193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.184219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 00:30:11.144 [2024-07-25 17:09:31.184611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.184620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it. 
00:30:11.144 [2024-07-25 17:09:31.184875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.144 [2024-07-25 17:09:31.184883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.144 qpair failed and we were unable to recover it.
00:30:11.150 [... the same three-line error repeats continuously from 17:09:31.184 through 17:09:31.277: connect() failed with errno = 111, followed by the sock connection error for tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it."; the duplicate occurrences are omitted here ...]
00:30:11.150 [2024-07-25 17:09:31.277494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.150 [2024-07-25 17:09:31.277523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.150 qpair failed and we were unable to recover it. 00:30:11.150 [2024-07-25 17:09:31.277995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.150 [2024-07-25 17:09:31.278004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.150 qpair failed and we were unable to recover it. 00:30:11.150 [2024-07-25 17:09:31.278470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.150 [2024-07-25 17:09:31.278499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.150 qpair failed and we were unable to recover it. 00:30:11.150 [2024-07-25 17:09:31.278969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.150 [2024-07-25 17:09:31.278979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.150 qpair failed and we were unable to recover it. 00:30:11.150 [2024-07-25 17:09:31.279564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.150 [2024-07-25 17:09:31.279593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.150 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.280065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.280075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.280526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.280555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.281027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.281038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.281596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.281625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.281983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.281992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 
00:30:11.151 [2024-07-25 17:09:31.282561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.282590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.283075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.283084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.283569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.283578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.284035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.284043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.284422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.284449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.284922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.284932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.285160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.285172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.285696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.285706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.285912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.285919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.286477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.286507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 
00:30:11.151 [2024-07-25 17:09:31.286981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.286990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.287481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.287510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.287746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.287756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.288209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.288218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.288691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.288699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.289153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.289161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.289706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.289735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.290060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.290069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.290607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.290636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.291109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.291120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 
00:30:11.151 [2024-07-25 17:09:31.291475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.291503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.291974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.291984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.292411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.292440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.292911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.292924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.293523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.293552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.294025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.294035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.294589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.294619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.295089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.295099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.295333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.295341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.295775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.295783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 
00:30:11.151 [2024-07-25 17:09:31.296340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.151 [2024-07-25 17:09:31.296349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.151 qpair failed and we were unable to recover it. 00:30:11.151 [2024-07-25 17:09:31.296666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.296674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.297112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.297120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.297373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.297381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.297833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.297842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.298307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.298315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.298838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.298846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.299313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.299321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.299538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.299545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.299762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.299769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 
00:30:11.152 [2024-07-25 17:09:31.300239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.300248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.300474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.300487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.300835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.300843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.301302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.301310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.301774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.301782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.302243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.302252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.302400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.302407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.302867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.302875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.303355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.303363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.303683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.303690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 
00:30:11.152 [2024-07-25 17:09:31.304142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.304150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.304619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.304627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.305109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.305116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.305595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.305604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.306066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.306074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.306518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.306526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.307005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.307013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.307565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.307595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.307918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.307927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.308501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.308530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 
00:30:11.152 [2024-07-25 17:09:31.308966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.308976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.309527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.309556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.309921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.309930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.310500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.310532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.310979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.310989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.311550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.311580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.312044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.312053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.312617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.312646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.312897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.312907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.313415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.313444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 
00:30:11.152 [2024-07-25 17:09:31.313639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.313648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.313760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.313767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.314226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.314234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.314622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.314631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.315095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.315102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.315576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.315584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.316061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.316070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.316521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.316528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.316856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.316864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.317118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.317126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 
00:30:11.152 [2024-07-25 17:09:31.317382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.317391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.317840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.317847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.318315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.318323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.318692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.318700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.319183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.319191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.319443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.319451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.319917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.319925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.320173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.320181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.320674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.152 [2024-07-25 17:09:31.320682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.152 qpair failed and we were unable to recover it. 00:30:11.152 [2024-07-25 17:09:31.321132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.321139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 
00:30:11.153 [2024-07-25 17:09:31.321287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.321295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.321760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.321768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.322250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.322258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.322717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.322724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.323183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.323191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.323653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.323661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.324158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.324166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.324649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.324657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.325116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.325124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.325688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.325717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 
00:30:11.153 [2024-07-25 17:09:31.326167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.326177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.326738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.326767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.327409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.327438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.327908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.327922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.328470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.328500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.328970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.328979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.329522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.329550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.329872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.329882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.330373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.330381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.330610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.330623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 
00:30:11.153 [2024-07-25 17:09:31.330953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.330961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.331426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.331434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.331918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.331927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.332176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.332184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.332646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.332654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.332871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.332881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.333348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.333357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.333822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.333829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.334284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.334292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 00:30:11.153 [2024-07-25 17:09:31.334606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.153 [2024-07-25 17:09:31.334615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.153 qpair failed and we were unable to recover it. 
00:30:11.153 [2024-07-25 17:09:31.335102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.153 [2024-07-25 17:09:31.335109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:11.153 qpair failed and we were unable to recover it.
00:30:11.153 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 17:09:31.335 through 17:09:31.427 ...]
00:30:11.427 [2024-07-25 17:09:31.427083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.427 [2024-07-25 17:09:31.427092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:11.427 qpair failed and we were unable to recover it.
00:30:11.427 [2024-07-25 17:09:31.427316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.427332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.427654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.427662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.428111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.428118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.428595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.428604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.429054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.429062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.429613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.429642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.430129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.430139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.430691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.430700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.431161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.431169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.431717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.431745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 
00:30:11.427 [2024-07-25 17:09:31.431967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.431979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.432432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.432441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.432639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.432650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.433119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.433127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.433223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.433230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.433673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.433681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.434219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.434228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.434688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.434695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.435176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.435184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.435636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.435645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 
00:30:11.427 [2024-07-25 17:09:31.435878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.435885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.427 qpair failed and we were unable to recover it. 00:30:11.427 [2024-07-25 17:09:31.436347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.427 [2024-07-25 17:09:31.436355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.436842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.436850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.437315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.437323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.437788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.437796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.438340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.438349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.438593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.438601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.439080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.439087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.439591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.439598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.440143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.440150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 
00:30:11.428 [2024-07-25 17:09:31.440606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.440614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.440859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.440867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.441369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.441377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.441747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.441754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.442219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.442227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.442642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.442650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.442907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.442915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.443147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.443155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.443614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.443622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.444082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.444089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 
00:30:11.428 [2024-07-25 17:09:31.444569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.444579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.444903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.444911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.445390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.445398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.445857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.445865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.446110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.446118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.446343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.446352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.446794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.446801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.447262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.447270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.447744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.447752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.448002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.448010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 
00:30:11.428 [2024-07-25 17:09:31.448458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.448465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.448691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.448703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.449018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.449027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.449498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.449506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.449997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.450005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.450469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.450477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.450705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.450713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.428 [2024-07-25 17:09:31.451178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.428 [2024-07-25 17:09:31.451185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.428 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.451664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.451672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.452154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.452162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 
00:30:11.429 [2024-07-25 17:09:31.452722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.452750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.453106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.453115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.453562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.453570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.454026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.454034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.454591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.454620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.455095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.455105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.455579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.455588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.456057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.456066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.456612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.456641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.457001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.457011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 
00:30:11.429 [2024-07-25 17:09:31.457557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.457586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.458058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.458068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.458623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.458652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.459118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.459129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.459681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.459710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.460178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.460187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.460752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.460781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.461409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.461438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.461926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.461935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.462495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.462523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 
00:30:11.429 [2024-07-25 17:09:31.462992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.463005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.463384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.463413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.463867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.463876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.464457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.464487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.464955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.464964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.465525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.465554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.466039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.466048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.466611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.466641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.467110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.467120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.467673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.467701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 
00:30:11.429 [2024-07-25 17:09:31.468150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.468160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.468390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.468398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.468599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.468606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.469092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.469100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.429 [2024-07-25 17:09:31.469562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.429 [2024-07-25 17:09:31.469571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.429 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.470032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.470040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.470293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.470302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.470677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.470685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.470918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.470926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.471330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.471339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 
00:30:11.430 [2024-07-25 17:09:31.471825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.471832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.472291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.472299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.472634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.472641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.473100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.473108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.473581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.473589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.474039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.474046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.474527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.474535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.474885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.474893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.475458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.475488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.475955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.475965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 
00:30:11.430 [2024-07-25 17:09:31.476534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.476563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.477033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.477042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.477602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.477631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.478099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.478108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.478568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.478577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.479035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.479043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.479425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.479454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.479673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.479686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.479912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.479921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.480378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.480387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 
00:30:11.430 [2024-07-25 17:09:31.480717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.480729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.481188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.481195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.481648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.481656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.481906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.481914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.430 [2024-07-25 17:09:31.482335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.430 [2024-07-25 17:09:31.482343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.430 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.482559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.482570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.483040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.483048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.483505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.483513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.483835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.483843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.484067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.484078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 
00:30:11.431 [2024-07-25 17:09:31.484307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.484315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.484529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.484537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.485002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.485010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.485514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.485522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.485869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.485877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.486337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.486345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.486805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.486813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.487268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.487276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.487607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.487615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.488077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.488084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 
00:30:11.431 [2024-07-25 17:09:31.488562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.488570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.489078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.489086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.489537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.489545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.490005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.490013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.490448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.490477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.491024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.491034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.491401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.491429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.491886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.491896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.492413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.492441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.492912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.492922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 
00:30:11.431 [2024-07-25 17:09:31.493509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.493538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.494005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.494014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.494471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.494500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.494753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.494763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.494958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.494965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.495372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.495380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.495848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.495856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.496300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.496308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.496563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.496571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.497028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.497036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 
00:30:11.431 [2024-07-25 17:09:31.497494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.431 [2024-07-25 17:09:31.497505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.431 qpair failed and we were unable to recover it. 00:30:11.431 [2024-07-25 17:09:31.497961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.497970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.498211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.498221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.498679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.498687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.499012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.499019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.499485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.499493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.499986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.499994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.500548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.500577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.501142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.501152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.501630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.501639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 
00:30:11.432 [2024-07-25 17:09:31.502125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.502133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.502692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.502721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.503205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.503215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.503655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.503685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.504180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.504190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.504753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.504782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.505422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.505451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.505920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.505930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.506363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.506391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.506869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.506878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 
00:30:11.432 [2024-07-25 17:09:31.507440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.507469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.507729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.507738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.508192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.508203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.508644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.508651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.509116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.509124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.509603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.509611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.509866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.509874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.510343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.510352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.510613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.510621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.510951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.510959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 
00:30:11.432 [2024-07-25 17:09:31.511398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.511405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.511874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.511882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.512342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.512350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.512600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.512608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.513083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.513090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.513435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.513443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.513643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.513651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.432 [2024-07-25 17:09:31.514132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.432 [2024-07-25 17:09:31.514139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.432 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.514624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.514632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.514864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.514872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 
00:30:11.433 [2024-07-25 17:09:31.515357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.515366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.515630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.515638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.515862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.515869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.516327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.516334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.516797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.516804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.517283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.517291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.517758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.517766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.518207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.518215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.518335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.518342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.518587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.518595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 
00:30:11.433 [2024-07-25 17:09:31.519098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.519105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.519581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.519589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.520046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.520054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.520416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.520423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.520915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.520923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.521257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.521265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.521725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.521732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.522191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.522199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.522679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.522687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.523009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.523019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 
00:30:11.433 [2024-07-25 17:09:31.523560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.523589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.524060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.524070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.524646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.524675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.525148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.525158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.525676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.525705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.526172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.526181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.526638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.526667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.527136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.527146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.527236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.527250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.527698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.527706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 
00:30:11.433 [2024-07-25 17:09:31.527932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.527944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.528525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.528553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.529023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.529032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.529590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.433 [2024-07-25 17:09:31.529619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.433 qpair failed and we were unable to recover it. 00:30:11.433 [2024-07-25 17:09:31.530072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.530081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.530198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.530214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.530578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.530605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.531089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.531099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.531595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.531604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.531854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.531863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 
00:30:11.434 [2024-07-25 17:09:31.532497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.532529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.533003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.533013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.533588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.533617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.534086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.534095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.534591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.534599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.535065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.535073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.535634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.535663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.536129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.536138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.536688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.536717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.537182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.537192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 
00:30:11.434 [2024-07-25 17:09:31.537800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.537829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.538152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.538162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.538431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.538461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.538687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.538696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.539187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.539195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.539643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.539651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.540121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.540129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.540585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.540593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.541072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.541079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.541654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.541682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 
00:30:11.434 [2024-07-25 17:09:31.542144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.542153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.542708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.542737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.543411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.543446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.543914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.543924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.544484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.544513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.544769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.544778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.545248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.545256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.545736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.545744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.546203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.546211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.546678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.546686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 
00:30:11.434 [2024-07-25 17:09:31.547013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.434 [2024-07-25 17:09:31.547021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.434 qpair failed and we were unable to recover it. 00:30:11.434 [2024-07-25 17:09:31.547558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.547587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.548134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.548143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.548599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.548607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.549097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.549105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.549328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.549342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.549801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.549809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.550041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.550049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.550280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.550288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.550812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.550819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 
00:30:11.435 [2024-07-25 17:09:31.551109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.551120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.551566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.551574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.551691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.551701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.552003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.552011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.552472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.552481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.552932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.552939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.553421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.553429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.553885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.553893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.554354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.554362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.554817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.554825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 
00:30:11.435 [2024-07-25 17:09:31.555302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.555310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.555772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.555780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.556246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.556254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.556713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.556721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.557043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.557050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.557374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.557382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.557746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.557754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.558210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.558218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.558674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.558681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.559150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.559158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 
00:30:11.435 [2024-07-25 17:09:31.559531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.559539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.435 [2024-07-25 17:09:31.560003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.435 [2024-07-25 17:09:31.560011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.435 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.560461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.560468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.560934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.560941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.561491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.561520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.561968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.561979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.562458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.562486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.562716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.562725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.563075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.563083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.563313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.563321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 
00:30:11.436 [2024-07-25 17:09:31.563550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.563558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.564030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.564038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.564296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.564304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.564774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.564781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.565264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.565271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.565523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.565531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.566015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.566023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.566481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.566489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.566723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.566730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.567180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.567188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 
00:30:11.436 [2024-07-25 17:09:31.567631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.567640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.567866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.567874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.568340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.568349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.568807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.568814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.569275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.569283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.569745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.569753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.570239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.570247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.570726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.570733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.571208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.571216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.571671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.571678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 
00:30:11.436 [2024-07-25 17:09:31.572157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.572165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.572619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.572627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.572949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.572956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.573411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.573419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.573897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.573905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.574366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.574374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.574838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.574846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.436 [2024-07-25 17:09:31.575295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.436 [2024-07-25 17:09:31.575302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.436 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.575643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.575650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.576108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.576116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 
00:30:11.437 [2024-07-25 17:09:31.576344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.576352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.576835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.576843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.577065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.577073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.577396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.577403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.577630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.577643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.577988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.577996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.578078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.578088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.578507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.578515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.578842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.578850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.579299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.579307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 
00:30:11.437 [2024-07-25 17:09:31.579762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.579769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.580251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.580259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.580481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.580491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.580814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.580822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.580945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.580951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.581397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.581405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.581847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.581855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.582313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.582321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.582863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.582870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.583321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.583329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 
00:30:11.437 [2024-07-25 17:09:31.583781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.583791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.584247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.584256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.584712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.584721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.585202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.585210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.585685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.585692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.585921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.585929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.586393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.586401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.586885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.586893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.587436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.587443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.587848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.587856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 
00:30:11.437 [2024-07-25 17:09:31.588439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.588468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.588799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.588808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.589261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.589270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.589730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.437 [2024-07-25 17:09:31.589738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.437 qpair failed and we were unable to recover it. 00:30:11.437 [2024-07-25 17:09:31.590208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.590217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.590714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.590722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.591173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.591182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.591546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.591555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.592003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.592010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.592497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.592505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 
00:30:11.438 [2024-07-25 17:09:31.592959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.592966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.593521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.593550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.594027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.594037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.594625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.594654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.594909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.594918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.595500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.595529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.596003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.596013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.596534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.596563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.596786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.596799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.597132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.597141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 
00:30:11.438 [2024-07-25 17:09:31.597472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.597480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.597842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.597849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.598305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.598313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.598769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.598776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.599139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.599146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.599386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.599393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.599898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.599906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.600367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.600375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.600834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.600842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.601332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.601340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 
00:30:11.438 [2024-07-25 17:09:31.601789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.601800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.602257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.602264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.602729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.602736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.603098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.603106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.603573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.603581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.604041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.604049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.604516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.604523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.604847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.604855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.605429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.605458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.438 [2024-07-25 17:09:31.605815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.605824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 
00:30:11.438 [2024-07-25 17:09:31.606301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.438 [2024-07-25 17:09:31.606309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.438 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.606763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.606772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.607222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.607230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.607694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.607701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.608160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.608169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.608651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.608659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.609117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.609125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.609582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.609589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.610044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.610051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.610408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.610437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 
00:30:11.439 [2024-07-25 17:09:31.610928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.610938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.611194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.611212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.611655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.611663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.612156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.612164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.612732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.612761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.612998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.613008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.613586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.613615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.614097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.614107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.614588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.614596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.615061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.615069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 
00:30:11.439 [2024-07-25 17:09:31.615615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.615644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.615817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.615827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.616310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.616318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.616783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.616791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.617254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.617262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.617756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.617765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.618156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.618164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.618616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.618624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.618945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.618953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.619441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.619449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 
00:30:11.439 [2024-07-25 17:09:31.619920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.619931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.620482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.620511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.620832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.620842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.621327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.621336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.621806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.621813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.622148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.622156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.439 [2024-07-25 17:09:31.622608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.439 [2024-07-25 17:09:31.622616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.439 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.623096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.623103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.623328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.623336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.623588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.623597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 
00:30:11.440 [2024-07-25 17:09:31.624059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.624067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.624519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.624527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.624719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.624727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.625061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.625071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.625526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.625534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.626017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.626025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.626597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.626626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.627096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.627106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.627566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.627575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.628097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.628105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 
00:30:11.440 [2024-07-25 17:09:31.628463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.628472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.628729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.628738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.629228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.629236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.629719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.629728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.630185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.630193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.630655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.630663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.631105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.631113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.631559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.631567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.631891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.631899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 00:30:11.440 [2024-07-25 17:09:31.632346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.440 [2024-07-25 17:09:31.632354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.440 qpair failed and we were unable to recover it. 
00:30:11.440 [2024-07-25 17:09:31.632814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.440 [2024-07-25 17:09:31.632822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:11.440 qpair failed and we were unable to recover it.
00:30:11.440 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 2024-07-25 17:09:31.632 through 17:09:31.725 (elapsed 00:30:11.440 - 00:30:11.715) ...]
00:30:11.715 [2024-07-25 17:09:31.725528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.715 [2024-07-25 17:09:31.725557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:11.715 qpair failed and we were unable to recover it.
00:30:11.715 [2024-07-25 17:09:31.726011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.726021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.726618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.726647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.727169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.727179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.727806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.727836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.728074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.728084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.728552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.728561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.728813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.728821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.729303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.729312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.729762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.729770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.730117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.730125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 
00:30:11.715 [2024-07-25 17:09:31.730587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.730596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.730821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.730835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.731062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.731070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.731552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.731560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.732101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.732111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.732323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.732334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.732854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.732864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.733240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.733248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.733707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.733715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.733932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.733942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 
00:30:11.715 [2024-07-25 17:09:31.734352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.734362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.734580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.734590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.735023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.735032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.735517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.735525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.735994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.736002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.736485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.736494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.736739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.736747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.737234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.737245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.737706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.737715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.738182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.738190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 
00:30:11.715 [2024-07-25 17:09:31.738438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.738446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.738892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.738900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.739357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.739365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.739823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.739831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.740288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.740296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.740777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.740786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.741243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.741251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.741758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.741766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.742221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.742229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.742689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.742698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 
00:30:11.715 [2024-07-25 17:09:31.742948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.742956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.743157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.743165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.743708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.743716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.744173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.744182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.744637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.744645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.745101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.745109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.745336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.745345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.745805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.745814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.746065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.746074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.746513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.746521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 
00:30:11.715 [2024-07-25 17:09:31.746979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.746987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.747569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.747598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.747918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.747928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.748482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.748511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.748763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.748773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.749226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.749234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.749568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.749577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.749971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.749979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.750442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.750450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.750684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.750692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 
00:30:11.715 [2024-07-25 17:09:31.751106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.751115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.751566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.751574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.752023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.752031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.752515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.752524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.752989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.752998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.753382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.753411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.753765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.753775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.715 [2024-07-25 17:09:31.754259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.715 [2024-07-25 17:09:31.754271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.715 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.754781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.754790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.755254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.755262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 
00:30:11.716 [2024-07-25 17:09:31.755725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.755733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.756213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.756222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.756677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.756686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.757131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.757139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.757407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.757416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.757911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.757919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.758368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.758376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.758568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.758577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.759045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.759053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.759534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.759542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 
00:30:11.716 [2024-07-25 17:09:31.759869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.759877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.760343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.760352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.760569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.760577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.761074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.761082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.761569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.761577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.762042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.762050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.762504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.762534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.762980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.762990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.763466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.763495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.763964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.763974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 
00:30:11.716 [2024-07-25 17:09:31.764407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.764436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.764918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.764929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.765486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.765516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.765961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.765970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.766526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.766555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.767042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.767052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.767651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.767680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.768002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.768012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.768576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.768605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.768980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.768991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 
00:30:11.716 [2024-07-25 17:09:31.769565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.769594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.770074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.770083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.770570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.770578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.770952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.770960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.771535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.771564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.772033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.772042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.772413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.772442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.772764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.772776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.773241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.773249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.773705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.773713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 
00:30:11.716 [2024-07-25 17:09:31.774075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.774083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.774546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.774555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.774920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.774928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.775480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.775509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.775979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.775989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.776414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.776443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.776910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.776920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.777482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.777511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.778075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.778085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.778473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.778482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 
00:30:11.716 [2024-07-25 17:09:31.778716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.778724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.779193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.779204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.779688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.779696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.780172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.780180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.780806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.780835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.781454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.781483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.781954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.781964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.782534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.782563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.716 [2024-07-25 17:09:31.783034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.716 [2024-07-25 17:09:31.783043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.716 qpair failed and we were unable to recover it. 00:30:11.717 [2024-07-25 17:09:31.783421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.717 [2024-07-25 17:09:31.783450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.717 qpair failed and we were unable to recover it. 
00:30:11.717 [2024-07-25 17:09:31.783675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.717 [2024-07-25 17:09:31.783688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420
00:30:11.717 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for 29 further attempts between 17:09:31.783 and 17:09:31.795; only the timestamps differ ...]
00:30:11.717 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:11.717 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:30:11.717 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:11.717 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:11.717 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 more connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x7f5dcc000b90 (addr=10.0.0.2, port=4420) are interleaved with the shell trace above, timestamps 17:09:31.796-31.799 ...]
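The errno = 111 repeated throughout this stretch is Linux's ECONNREFUSED: the host-side NVMe/TCP initiator keeps calling connect() against 10.0.0.2:4420 while nothing is listening on that port yet, so each attempt is rejected immediately and the qpair reconnect logic retries. A throwaway check (assuming a Linux host with python3 available; not part of the autotest scripts) confirms the mapping:

    # Decode errno 111 on Linux (illustrative only; not part of the autotest flow).
    python3 -c 'import errno, os; print(errno.errorcode[111], "->", os.strerror(111))'
    # ECONNREFUSED -> Connection refused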
[... the connect()-refused retry loop continues: 89 further connect() failed (errno = 111) / sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 / qpair failed sequences logged between 17:09:31.799 and 17:09:31.836 ...]
00:30:11.718 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:11.718 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:11.718 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:11.718 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 more connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x7f5dcc000b90 (addr=10.0.0.2, port=4420) are interleaved with the shell trace above, timestamps 17:09:31.836-31.839 ...]
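The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 call traced above creates the RAM-backed bdev the target will later export. rpc_cmd is the test suite's wrapper around SPDK's JSON-RPC interface (scripts/rpc.py), so a standalone equivalent would look roughly like the sketch below; the RPC socket path shown is the usual default and is an assumption here, not something taken from this log:

    # Rough standalone equivalent of the traced rpc_cmd call (illustrative sketch).
    # Creates a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    # On success the RPC prints the new bdev's name, which is what appears as
    # "Malloc0" a little further down in this log.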
[... 30 further connect() failed (errno = 111) / sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 / qpair failed sequences logged between 17:09:31.839 and 17:09:31.851 ...]
00:30:11.719 Malloc0
00:30:11.719 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:11.719 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:11.719 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:11.719 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 more connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x7f5dcc000b90 (addr=10.0.0.2, port=4420) are interleaved with the shell trace above, timestamps 17:09:31.851-31.854 ...]
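With the Malloc0 bdev in place, the trace above shows the test initializing the TCP transport (rpc_cmd nvmf_create_transport -t tcp -o; the -o flag comes from the test's transport options and is passed through here without further interpretation). For orientation, a bare-bones by-hand target bring-up would continue along these lines; the NQN, serial number, and listener address below are placeholders for illustration, not values taken from this log:

    # Illustrative sketch of a minimal by-hand target setup (not the exact test flow).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420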
[... 9 further connect() failed (errno = 111) / sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 / qpair failed sequences logged between 17:09:31.855 and 17:09:31.858 ...]
00:30:11.719 [2024-07-25 17:09:31.859084] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.719 [2024-07-25 17:09:31.859247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.859255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.859705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.859711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.860075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.860082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.860452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.860459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.860910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.860917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.861405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.861411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.861853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.861859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.862297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.862303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.862749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.862756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 
00:30:11.719 [2024-07-25 17:09:31.863215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.863221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.863692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.863699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.864139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.864146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.864460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.864468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.864920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.864926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.865393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.865399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.865839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.865846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.866302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.866309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.866645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.866652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 00:30:11.719 [2024-07-25 17:09:31.867133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.719 [2024-07-25 17:09:31.867140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.719 qpair failed and we were unable to recover it. 
00:30:11.719 [2024-07-25 17:09:31.867593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.867600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.868043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.868050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.720 [2024-07-25 17:09:31.868517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.868545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.720 [2024-07-25 17:09:31.869005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.869014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.720 [2024-07-25 17:09:31.869575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.869603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.870173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.870181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.870719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.870747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.871082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.871090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 
00:30:11.720 [2024-07-25 17:09:31.871350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.871360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.871691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.871698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.872152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.872159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.872650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.872657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.873095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.873101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.873418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.873425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.873650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.873657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.873920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.873926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.874402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.874408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.874628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.874638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 
00:30:11.720 [2024-07-25 17:09:31.875119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.875127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.875359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.875366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.875599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.875606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.876086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.876093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.876325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.876333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.876701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.876709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.877159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.877167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.877623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.877630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.878111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.878118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.878592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.878599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 
00:30:11.720 [2024-07-25 17:09:31.878926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.878934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.879249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.879256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.879709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.879716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.720 [2024-07-25 17:09:31.880255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.880263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.720 [2024-07-25 17:09:31.880717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.880724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.720 [2024-07-25 17:09:31.881175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.881185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.881630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.881637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.882076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.882083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 
00:30:11.720 [2024-07-25 17:09:31.882618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.882645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.882969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.882978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.883197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.883214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.883671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.883678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.884139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.884146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.884753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.884780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.885376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.885404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.885602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.885611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.886019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.886026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.886340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.886347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 
00:30:11.720 [2024-07-25 17:09:31.886484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.886490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.886825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.886831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.887066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.887073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.887541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.887548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.720 [2024-07-25 17:09:31.888086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.888093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.720 [2024-07-25 17:09:31.888650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.888657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.720 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.720 [2024-07-25 17:09:31.889113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.889120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.889478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.889485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 
00:30:11.720 [2024-07-25 17:09:31.889932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.889938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.890382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.890389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.890852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.890859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.891311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.891318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.891658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.891665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.891898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.891905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.892385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.892392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.892924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.892931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.893144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.893151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.893605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.893612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 
00:30:11.720 [2024-07-25 17:09:31.894050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.894057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.894490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.894518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.720 [2024-07-25 17:09:31.895008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.720 [2024-07-25 17:09:31.895016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.720 qpair failed and we were unable to recover it. 00:30:11.721 [2024-07-25 17:09:31.895365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.721 [2024-07-25 17:09:31.895574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.721 [2024-07-25 17:09:31.895600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5dcc000b90 with addr=10.0.0.2, port=4420 00:30:11.721 qpair failed and we were unable to recover it. 00:30:11.721 [2024-07-25 17:09:31.899719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.899820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.899836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.899843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.899847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.899863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 
00:30:11.721 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.721 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.721 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.721 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.721 [2024-07-25 17:09:31.909702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.909802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.909821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.909828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.909833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.909848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 00:30:11.721 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.721 17:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1618401 00:30:11.721 [2024-07-25 17:09:31.919737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.919827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.919846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.919852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.919857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.919872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 
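For reference, the target-side setup that the rpc_cmd calls above are driving can be reproduced against a standalone SPDK nvmf_tgt with scripts/rpc.py (rpc_cmd in the autotest scripts is a wrapper around the same RPCs). This is a minimal sketch using only the arguments visible in this log; it assumes nvmf_tgt is already running and that the Malloc0 bdev has already been created earlier in the test:

  # create the TCP transport (the -o flag is carried over verbatim from the test invocation above)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  # create the subsystem, allow any host (-a), set its serial number (-s)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # attach the Malloc0 bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # expose the subsystem and the discovery service on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420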
00:30:11.721 [2024-07-25 17:09:31.929597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.929694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.929708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.929713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.929718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.929730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 00:30:11.721 [2024-07-25 17:09:31.939640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.939734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.939747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.939752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.939758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.939770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 00:30:11.721 [2024-07-25 17:09:31.949753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.949838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.949851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.949856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.949861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.949872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 
00:30:11.721 [2024-07-25 17:09:31.959745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.959832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.959851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.959857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.959862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.959878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 00:30:11.721 [2024-07-25 17:09:31.969841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.721 [2024-07-25 17:09:31.969944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.721 [2024-07-25 17:09:31.969963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.721 [2024-07-25 17:09:31.969970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.721 [2024-07-25 17:09:31.969975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.721 [2024-07-25 17:09:31.969990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.721 qpair failed and we were unable to recover it. 00:30:11.983 [2024-07-25 17:09:31.979975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.983 [2024-07-25 17:09:31.980073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.983 [2024-07-25 17:09:31.980093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.983 [2024-07-25 17:09:31.980099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.983 [2024-07-25 17:09:31.980104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.983 [2024-07-25 17:09:31.980120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.983 qpair failed and we were unable to recover it. 
00:30:11.983 [2024-07-25 17:09:31.989751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.983 [2024-07-25 17:09:31.989835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.983 [2024-07-25 17:09:31.989849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.983 [2024-07-25 17:09:31.989855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.983 [2024-07-25 17:09:31.989859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.983 [2024-07-25 17:09:31.989872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.983 qpair failed and we were unable to recover it. 00:30:11.983 [2024-07-25 17:09:31.999864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.983 [2024-07-25 17:09:31.999963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.983 [2024-07-25 17:09:31.999982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.983 [2024-07-25 17:09:31.999989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.983 [2024-07-25 17:09:31.999994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.983 [2024-07-25 17:09:32.000009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.983 qpair failed and we were unable to recover it. 00:30:11.983 [2024-07-25 17:09:32.009867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.983 [2024-07-25 17:09:32.009958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.009977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.009983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.009989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.010005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.019934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.020064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.020083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.020091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.020096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.020112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.029966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.030049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.030063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.030072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.030077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.030090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.039962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.040043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.040056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.040061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.040066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.040078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.049874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.049963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.049976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.049982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.049986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.049997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.060003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.060089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.060102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.060108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.060112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.060124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.070053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.070143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.070156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.070161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.070166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.070177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.080114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.080196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.080211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.080217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.080221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.080233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.090094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.090180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.090193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.090199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.090207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.090218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.100159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.100253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.100266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.100271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.100276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.100287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.110184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.110268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.110280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.110286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.110291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.110303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.120184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.120267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.120282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.120288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.120293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.120304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.130231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.130317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.130330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.130336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.130341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.130353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.140253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.140340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.140354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.140359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.140363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.140376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.150271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.150351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.150364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.150370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.150374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.150386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.160284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.160365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.160378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.160383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.160387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.160402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.170341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.170426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.170439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.170444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.170449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.170460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.180418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.180527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.180540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.180546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.180550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.180561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.190655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.190744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.190756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.190763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.190767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.190779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.200471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.200553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.200565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.200572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.200576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.200588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.210482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.210601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.210618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.210623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.210628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.210639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.220482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.220613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.220626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.220631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.220636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.220647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 
00:30:11.984 [2024-07-25 17:09:32.230505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.984 [2024-07-25 17:09:32.230587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.984 [2024-07-25 17:09:32.230600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.984 [2024-07-25 17:09:32.230608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.984 [2024-07-25 17:09:32.230613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.984 [2024-07-25 17:09:32.230624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.984 qpair failed and we were unable to recover it. 00:30:11.984 [2024-07-25 17:09:32.240517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.985 [2024-07-25 17:09:32.240596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.985 [2024-07-25 17:09:32.240609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.985 [2024-07-25 17:09:32.240615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.985 [2024-07-25 17:09:32.240620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.985 [2024-07-25 17:09:32.240632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.985 qpair failed and we were unable to recover it. 00:30:11.985 [2024-07-25 17:09:32.250493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.985 [2024-07-25 17:09:32.250591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.985 [2024-07-25 17:09:32.250605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.985 [2024-07-25 17:09:32.250611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.985 [2024-07-25 17:09:32.250619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:11.985 [2024-07-25 17:09:32.250631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.985 qpair failed and we were unable to recover it. 
00:30:12.246 [2024-07-25 17:09:32.260587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.246 [2024-07-25 17:09:32.260699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.246 [2024-07-25 17:09:32.260712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.246 [2024-07-25 17:09:32.260718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.246 [2024-07-25 17:09:32.260723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.246 [2024-07-25 17:09:32.260734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.246 qpair failed and we were unable to recover it. 00:30:12.246 [2024-07-25 17:09:32.270607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.246 [2024-07-25 17:09:32.270691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.246 [2024-07-25 17:09:32.270704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.246 [2024-07-25 17:09:32.270711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.246 [2024-07-25 17:09:32.270716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.246 [2024-07-25 17:09:32.270728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.246 qpair failed and we were unable to recover it. 00:30:12.246 [2024-07-25 17:09:32.280617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.246 [2024-07-25 17:09:32.280695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.246 [2024-07-25 17:09:32.280707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.246 [2024-07-25 17:09:32.280713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.246 [2024-07-25 17:09:32.280719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.246 [2024-07-25 17:09:32.280731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.246 qpair failed and we were unable to recover it. 
00:30:12.246 [2024-07-25 17:09:32.290673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.246 [2024-07-25 17:09:32.290761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.246 [2024-07-25 17:09:32.290773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.290780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.290785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.290796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.300729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.300831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.300851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.300858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.300863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.300878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.310694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.310786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.310806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.310813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.310818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.310834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 
00:30:12.247 [2024-07-25 17:09:32.320731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.320818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.320838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.320845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.320850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.320865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.330780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.330872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.330891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.330898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.330904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.330920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.340804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.340896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.340915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.340923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.340931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.340947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 
00:30:12.247 [2024-07-25 17:09:32.350828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.350912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.350932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.350939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.350944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.350959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.360846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.360933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.360952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.360959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.360964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.360979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.370861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.370951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.370965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.370971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.370976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.370989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 
00:30:12.247 [2024-07-25 17:09:32.380910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.381037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.381051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.381056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.381061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.381072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.390951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.391031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.391044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.391049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.391054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.391066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 00:30:12.247 [2024-07-25 17:09:32.400962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.401043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.401062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.401070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.247 [2024-07-25 17:09:32.401074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.247 [2024-07-25 17:09:32.401090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.247 qpair failed and we were unable to recover it. 
00:30:12.247 [2024-07-25 17:09:32.411003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.247 [2024-07-25 17:09:32.411086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.247 [2024-07-25 17:09:32.411101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.247 [2024-07-25 17:09:32.411106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.411111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.411123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.248 [2024-07-25 17:09:32.421096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.421217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.421228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.421234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.421238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.421249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.248 [2024-07-25 17:09:32.431051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.431160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.431174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.431183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.431187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.431199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 
00:30:12.248 [2024-07-25 17:09:32.440962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.441044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.441057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.441063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.441068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.441080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.248 [2024-07-25 17:09:32.451123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.451220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.451233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.451238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.451243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.451255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.248 [2024-07-25 17:09:32.461141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.461242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.461256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.461262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.461266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.461278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 
00:30:12.248 [2024-07-25 17:09:32.471168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.471258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.471270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.471277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.471282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.471294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.248 [2024-07-25 17:09:32.481110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.481191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.481208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.481214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.481218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.481230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.248 [2024-07-25 17:09:32.491247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.491376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.491390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.491395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.491400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.491412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 
00:30:12.248 [2024-07-25 17:09:32.501215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.501311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.501324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.501330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.501335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.501346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.248 [2024-07-25 17:09:32.511268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.248 [2024-07-25 17:09:32.511359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.248 [2024-07-25 17:09:32.511372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.248 [2024-07-25 17:09:32.511377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.248 [2024-07-25 17:09:32.511382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.248 [2024-07-25 17:09:32.511395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.248 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.521279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.521362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.521379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.521385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.521389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.521401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 
00:30:12.510 [2024-07-25 17:09:32.531337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.531420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.531433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.531438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.531444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.531455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.541361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.541451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.541464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.541471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.541475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.541487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.551385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.551464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.551477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.551483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.551489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.551500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 
00:30:12.510 [2024-07-25 17:09:32.561383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.561471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.561484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.561489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.561495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.561509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.571458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.571590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.571604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.571609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.571614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.571626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.581476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.581560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.581573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.581579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.581583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.581594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 
00:30:12.510 [2024-07-25 17:09:32.591498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.591578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.591591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.591597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.591602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.591613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.601535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.601615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.601628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.601633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.601639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.601650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.611569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.611653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.611669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.611675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.611680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.611692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 
00:30:12.510 [2024-07-25 17:09:32.621593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.621685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.510 [2024-07-25 17:09:32.621698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.510 [2024-07-25 17:09:32.621704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.510 [2024-07-25 17:09:32.621709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.510 [2024-07-25 17:09:32.621721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.510 qpair failed and we were unable to recover it. 00:30:12.510 [2024-07-25 17:09:32.631623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.510 [2024-07-25 17:09:32.631706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.631718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.631725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.631730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.631742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.641596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.641682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.641694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.641700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.641705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.641717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 
00:30:12.511 [2024-07-25 17:09:32.651735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.651817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.651830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.651836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.651841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.651856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.661578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.661663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.661677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.661683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.661688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.661700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.671726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.671811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.671824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.671829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.671834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.671846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 
00:30:12.511 [2024-07-25 17:09:32.681707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.681796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.681816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.681823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.681828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.681843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.691672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.691765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.691779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.691785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.691790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.691802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.701793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.701884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.701897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.701903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.701908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.701921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 
00:30:12.511 [2024-07-25 17:09:32.711850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.711934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.711947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.711954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.711959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.711971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.721828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.721914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.721933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.721940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.721945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.721961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.731919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.732009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.732029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.732036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.732041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.732057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 
00:30:12.511 [2024-07-25 17:09:32.741882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.741975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.741996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.742002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.742011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.742027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.751951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.752041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.752060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.511 [2024-07-25 17:09:32.752067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.511 [2024-07-25 17:09:32.752072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.511 [2024-07-25 17:09:32.752088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.511 qpair failed and we were unable to recover it. 00:30:12.511 [2024-07-25 17:09:32.761947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.511 [2024-07-25 17:09:32.762030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.511 [2024-07-25 17:09:32.762044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.512 [2024-07-25 17:09:32.762051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.512 [2024-07-25 17:09:32.762056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.512 [2024-07-25 17:09:32.762069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.512 qpair failed and we were unable to recover it. 
00:30:12.512 [2024-07-25 17:09:32.771980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.512 [2024-07-25 17:09:32.772065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.512 [2024-07-25 17:09:32.772079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.512 [2024-07-25 17:09:32.772084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.512 [2024-07-25 17:09:32.772090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.512 [2024-07-25 17:09:32.772102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.512 qpair failed and we were unable to recover it. 00:30:12.773 [2024-07-25 17:09:32.782027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.782110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.782123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.782128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.782133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.782145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 00:30:12.773 [2024-07-25 17:09:32.792064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.792144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.792157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.792163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.792168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.792180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 
00:30:12.773 [2024-07-25 17:09:32.802052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.802162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.802178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.802183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.802188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.802205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 00:30:12.773 [2024-07-25 17:09:32.812058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.812147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.812160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.812166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.812171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.812183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 00:30:12.773 [2024-07-25 17:09:32.822138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.822226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.822240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.822246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.822251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.822264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 
00:30:12.773 [2024-07-25 17:09:32.832172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.832263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.832277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.832286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.832291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.832304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 00:30:12.773 [2024-07-25 17:09:32.842214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.842291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.842304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.842310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.842314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.842328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 00:30:12.773 [2024-07-25 17:09:32.852118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.852206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.852219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.852225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.852229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.852241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 
00:30:12.773 [2024-07-25 17:09:32.862275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.773 [2024-07-25 17:09:32.862363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.773 [2024-07-25 17:09:32.862376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.773 [2024-07-25 17:09:32.862382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.773 [2024-07-25 17:09:32.862386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.773 [2024-07-25 17:09:32.862397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.773 qpair failed and we were unable to recover it. 00:30:12.773 [2024-07-25 17:09:32.872288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.872371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.872384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.872389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.872394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.872406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.882332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.882418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.882432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.882437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.882441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.882453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 
00:30:12.774 [2024-07-25 17:09:32.892339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.892560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.892574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.892580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.892585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.892596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.902386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.902474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.902486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.902492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.902497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.902509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.912417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.912499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.912512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.912518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.912524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.912535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 
00:30:12.774 [2024-07-25 17:09:32.922413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.922498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.922514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.922520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.922525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.922536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.932477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.932561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.932573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.932579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.932584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.932596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.942488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.942574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.942587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.942593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.942598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.942611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 
00:30:12.774 [2024-07-25 17:09:32.952511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.952593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.952605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.952611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.952615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.952627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.962512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.962593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.962607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.962613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.962618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.962630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.972564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.972649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.972662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.972668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.972672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.972684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 
00:30:12.774 [2024-07-25 17:09:32.982575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.982661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.982674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.982680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.982685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.982696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:32.992642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:32.992737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.774 [2024-07-25 17:09:32.992750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.774 [2024-07-25 17:09:32.992756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.774 [2024-07-25 17:09:32.992760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.774 [2024-07-25 17:09:32.992772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.774 qpair failed and we were unable to recover it. 00:30:12.774 [2024-07-25 17:09:33.002643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.774 [2024-07-25 17:09:33.002729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.775 [2024-07-25 17:09:33.002742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.775 [2024-07-25 17:09:33.002748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.775 [2024-07-25 17:09:33.002753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.775 [2024-07-25 17:09:33.002764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.775 qpair failed and we were unable to recover it. 
00:30:12.775 [2024-07-25 17:09:33.012702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.775 [2024-07-25 17:09:33.012807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.775 [2024-07-25 17:09:33.012823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.775 [2024-07-25 17:09:33.012829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.775 [2024-07-25 17:09:33.012834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.775 [2024-07-25 17:09:33.012845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.775 qpair failed and we were unable to recover it. 00:30:12.775 [2024-07-25 17:09:33.022686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.775 [2024-07-25 17:09:33.022781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.775 [2024-07-25 17:09:33.022801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.775 [2024-07-25 17:09:33.022808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.775 [2024-07-25 17:09:33.022813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.775 [2024-07-25 17:09:33.022829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.775 qpair failed and we were unable to recover it. 00:30:12.775 [2024-07-25 17:09:33.032798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.775 [2024-07-25 17:09:33.032915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.775 [2024-07-25 17:09:33.032935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.775 [2024-07-25 17:09:33.032941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.775 [2024-07-25 17:09:33.032946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.775 [2024-07-25 17:09:33.032962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.775 qpair failed and we were unable to recover it. 
00:30:12.775 [2024-07-25 17:09:33.042740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.775 [2024-07-25 17:09:33.042828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.775 [2024-07-25 17:09:33.042848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.775 [2024-07-25 17:09:33.042855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.775 [2024-07-25 17:09:33.042860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:12.775 [2024-07-25 17:09:33.042875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.775 qpair failed and we were unable to recover it. 00:30:13.036 [2024-07-25 17:09:33.052877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.036 [2024-07-25 17:09:33.053002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.036 [2024-07-25 17:09:33.053022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.036 [2024-07-25 17:09:33.053029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.036 [2024-07-25 17:09:33.053034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.036 [2024-07-25 17:09:33.053054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.036 qpair failed and we were unable to recover it. 00:30:13.036 [2024-07-25 17:09:33.062823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.036 [2024-07-25 17:09:33.062914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.036 [2024-07-25 17:09:33.062933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.036 [2024-07-25 17:09:33.062940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.036 [2024-07-25 17:09:33.062945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.036 [2024-07-25 17:09:33.062961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.036 qpair failed and we were unable to recover it. 
00:30:13.036 [2024-07-25 17:09:33.072851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.036 [2024-07-25 17:09:33.072941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.036 [2024-07-25 17:09:33.072960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.036 [2024-07-25 17:09:33.072967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.036 [2024-07-25 17:09:33.072972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.036 [2024-07-25 17:09:33.072988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.036 qpair failed and we were unable to recover it. 00:30:13.036 [2024-07-25 17:09:33.082847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.036 [2024-07-25 17:09:33.082936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.036 [2024-07-25 17:09:33.082956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.036 [2024-07-25 17:09:33.082963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.082967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.082983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.092917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.093002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.093016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.093023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.093027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.093040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 
00:30:13.037 [2024-07-25 17:09:33.102933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.103019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.103036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.103043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.103047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.103060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.112976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.113085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.113105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.113111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.113116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.113131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.122967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.123052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.123066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.123073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.123078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.123091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 
00:30:13.037 [2024-07-25 17:09:33.133033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.133127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.133140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.133147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.133152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.133164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.143052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.143142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.143156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.143162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.143171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.143183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.153083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.153168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.153181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.153187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.153192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.153208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 
00:30:13.037 [2024-07-25 17:09:33.163020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.163101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.163114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.163119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.163125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.163137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.173103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.173186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.173198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.173208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.173213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.173225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.183089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.183175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.183188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.183194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.183198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.183215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 
00:30:13.037 [2024-07-25 17:09:33.193188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.193304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.193318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.193324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.193328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.193340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.203226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.203346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.203360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.203365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.203370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.037 [2024-07-25 17:09:33.203382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.037 qpair failed and we were unable to recover it. 00:30:13.037 [2024-07-25 17:09:33.213248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.037 [2024-07-25 17:09:33.213335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.037 [2024-07-25 17:09:33.213348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.037 [2024-07-25 17:09:33.213354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.037 [2024-07-25 17:09:33.213358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.213370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 
00:30:13.038 [2024-07-25 17:09:33.223240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.223322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.223334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.223340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.223345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.223357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 00:30:13.038 [2024-07-25 17:09:33.233227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.233351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.233364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.233373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.233377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.233390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 00:30:13.038 [2024-07-25 17:09:33.243228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.243322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.243337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.243342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.243347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.243359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 
00:30:13.038 [2024-07-25 17:09:33.253361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.253445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.253457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.253463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.253468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.253480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 00:30:13.038 [2024-07-25 17:09:33.263364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.263452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.263464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.263470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.263475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.263487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 00:30:13.038 [2024-07-25 17:09:33.273327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.273467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.273480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.273486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.273491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.273503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 
00:30:13.038 [2024-07-25 17:09:33.283431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.283516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.283529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.283535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.283540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.283551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 00:30:13.038 [2024-07-25 17:09:33.293475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.293580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.293593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.293599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.293603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.293615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 00:30:13.038 [2024-07-25 17:09:33.303486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.038 [2024-07-25 17:09:33.303577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.038 [2024-07-25 17:09:33.303590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.038 [2024-07-25 17:09:33.303596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.038 [2024-07-25 17:09:33.303601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.038 [2024-07-25 17:09:33.303612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.038 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-25 17:09:33.313530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.313616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.313629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.313635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.313640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.313651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-25 17:09:33.323536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.323618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.323631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.323640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.323645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.323657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-25 17:09:33.333569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.333655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.333668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.333674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.333678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.333690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-25 17:09:33.343577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.343663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.343676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.343681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.343686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.343699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-25 17:09:33.353637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.353724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.353737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.353743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.353748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.353760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-25 17:09:33.363662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.363743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.363756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.363761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.363767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.363778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-25 17:09:33.373699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.373786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.373805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.373812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.373817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.373833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-25 17:09:33.383706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.383799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.383818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.383825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.383830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.383846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-25 17:09:33.393758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.393860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.393880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.393887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.393892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.393908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-25 17:09:33.403737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.403825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.403844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.298 [2024-07-25 17:09:33.403850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.298 [2024-07-25 17:09:33.403855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.298 [2024-07-25 17:09:33.403871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-25 17:09:33.413810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.298 [2024-07-25 17:09:33.413898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.298 [2024-07-25 17:09:33.413921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.413928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.413933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.413949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.423744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.423836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.423850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.423856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.423861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.423873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-25 17:09:33.433848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.433930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.433942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.433949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.433954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.433966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.443868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.443949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.443962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.443968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.443974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.443986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.453884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.453969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.453982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.453987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.453993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.454008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-25 17:09:33.463936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.464022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.464034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.464040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.464045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.464057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.473964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.474055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.474075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.474082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.474087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.474103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.483969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.484055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.484068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.484075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.484080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.484093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-25 17:09:33.494032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.494131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.494145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.494152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.494156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.494168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.504044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.504175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.504192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.504198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.504206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.504218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.514116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.514196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.514214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.514219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.514224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.514237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-25 17:09:33.524118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.524196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.524212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.524218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.524223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.524235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.534144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.534384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.534399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.534405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.534409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.534421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.544173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.544264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.544277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.544284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.544291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.544303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-25 17:09:33.554197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.554285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.554298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.554305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.554309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.299 [2024-07-25 17:09:33.554321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-25 17:09:33.564226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.299 [2024-07-25 17:09:33.564304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.299 [2024-07-25 17:09:33.564317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.299 [2024-07-25 17:09:33.564322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.299 [2024-07-25 17:09:33.564328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.300 [2024-07-25 17:09:33.564339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.561 [2024-07-25 17:09:33.574265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.561 [2024-07-25 17:09:33.574355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.561 [2024-07-25 17:09:33.574367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.561 [2024-07-25 17:09:33.574373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.561 [2024-07-25 17:09:33.574378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.561 [2024-07-25 17:09:33.574390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.561 qpair failed and we were unable to recover it. 
00:30:13.561 [2024-07-25 17:09:33.584294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.561 [2024-07-25 17:09:33.584385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.561 [2024-07-25 17:09:33.584397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.561 [2024-07-25 17:09:33.584404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.561 [2024-07-25 17:09:33.584409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.561 [2024-07-25 17:09:33.584420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.561 qpair failed and we were unable to recover it. 00:30:13.561 [2024-07-25 17:09:33.594305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.561 [2024-07-25 17:09:33.594406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.561 [2024-07-25 17:09:33.594419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.561 [2024-07-25 17:09:33.594425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.561 [2024-07-25 17:09:33.594430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.561 [2024-07-25 17:09:33.594441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.561 qpair failed and we were unable to recover it. 00:30:13.561 [2024-07-25 17:09:33.604366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.561 [2024-07-25 17:09:33.604466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.561 [2024-07-25 17:09:33.604479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.561 [2024-07-25 17:09:33.604484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.561 [2024-07-25 17:09:33.604489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.561 [2024-07-25 17:09:33.604501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.561 qpair failed and we were unable to recover it. 
00:30:13.561 [2024-07-25 17:09:33.614385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.561 [2024-07-25 17:09:33.614487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.561 [2024-07-25 17:09:33.614501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.561 [2024-07-25 17:09:33.614506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.561 [2024-07-25 17:09:33.614511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.561 [2024-07-25 17:09:33.614523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.561 qpair failed and we were unable to recover it. 00:30:13.561 [2024-07-25 17:09:33.624397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.561 [2024-07-25 17:09:33.624492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.561 [2024-07-25 17:09:33.624505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.561 [2024-07-25 17:09:33.624511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.561 [2024-07-25 17:09:33.624516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.561 [2024-07-25 17:09:33.624528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.561 qpair failed and we were unable to recover it. 00:30:13.561 [2024-07-25 17:09:33.634431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.561 [2024-07-25 17:09:33.634509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.634522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.634531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.634536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.634548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 
00:30:13.562 [2024-07-25 17:09:33.644465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.644545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.644558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.644563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.644568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.644581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.654511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.654641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.654653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.654659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.654664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.654674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.664527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.664612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.664624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.664630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.664635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.664646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 
00:30:13.562 [2024-07-25 17:09:33.674552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.674636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.674649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.674655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.674659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.674671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.684538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.684617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.684629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.684635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.684641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.684652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.694605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.694689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.694702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.694708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.694713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.694725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 
00:30:13.562 [2024-07-25 17:09:33.704625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.704718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.704731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.704737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.704741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.704752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.714632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.714711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.714723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.714729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.714734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.714745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.724681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.724761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.724773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.724782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.724787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.724799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 
00:30:13.562 [2024-07-25 17:09:33.734694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.734783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.734803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.734810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.734815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.734831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.744704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.744791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.744805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.744811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.744816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.744829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 00:30:13.562 [2024-07-25 17:09:33.754754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.754845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.754865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.754872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.562 [2024-07-25 17:09:33.754877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.562 [2024-07-25 17:09:33.754893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.562 qpair failed and we were unable to recover it. 
00:30:13.562 [2024-07-25 17:09:33.764750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.562 [2024-07-25 17:09:33.764850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.562 [2024-07-25 17:09:33.764864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.562 [2024-07-25 17:09:33.764870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.563 [2024-07-25 17:09:33.764875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.563 [2024-07-25 17:09:33.764887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.563 qpair failed and we were unable to recover it. 00:30:13.563 [2024-07-25 17:09:33.774694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.563 [2024-07-25 17:09:33.774783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.563 [2024-07-25 17:09:33.774796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.563 [2024-07-25 17:09:33.774802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.563 [2024-07-25 17:09:33.774807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.563 [2024-07-25 17:09:33.774818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.563 qpair failed and we were unable to recover it. 00:30:13.563 [2024-07-25 17:09:33.784854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.563 [2024-07-25 17:09:33.784957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.563 [2024-07-25 17:09:33.784970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.563 [2024-07-25 17:09:33.784976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.563 [2024-07-25 17:09:33.784980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.563 [2024-07-25 17:09:33.784992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.563 qpair failed and we were unable to recover it. 
00:30:13.563 [2024-07-25 17:09:33.794883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.563 [2024-07-25 17:09:33.794973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.563 [2024-07-25 17:09:33.794993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.563 [2024-07-25 17:09:33.795000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.563 [2024-07-25 17:09:33.795005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.563 [2024-07-25 17:09:33.795021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.563 qpair failed and we were unable to recover it. 00:30:13.563 [2024-07-25 17:09:33.804918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.563 [2024-07-25 17:09:33.805008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.563 [2024-07-25 17:09:33.805027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.563 [2024-07-25 17:09:33.805034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.563 [2024-07-25 17:09:33.805039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.563 [2024-07-25 17:09:33.805054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.563 qpair failed and we were unable to recover it. 00:30:13.563 [2024-07-25 17:09:33.814903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.563 [2024-07-25 17:09:33.814992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.563 [2024-07-25 17:09:33.815014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.563 [2024-07-25 17:09:33.815021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.563 [2024-07-25 17:09:33.815026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.563 [2024-07-25 17:09:33.815042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.563 qpair failed and we were unable to recover it. 
00:30:13.563 [2024-07-25 17:09:33.824840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.563 [2024-07-25 17:09:33.824942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.563 [2024-07-25 17:09:33.824961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.563 [2024-07-25 17:09:33.824968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.563 [2024-07-25 17:09:33.824972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.563 [2024-07-25 17:09:33.824988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.563 qpair failed and we were unable to recover it. 00:30:13.826 [2024-07-25 17:09:33.834993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.826 [2024-07-25 17:09:33.835116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.826 [2024-07-25 17:09:33.835136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.826 [2024-07-25 17:09:33.835143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.826 [2024-07-25 17:09:33.835148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.826 [2024-07-25 17:09:33.835164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.826 qpair failed and we were unable to recover it. 00:30:13.826 [2024-07-25 17:09:33.844896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.826 [2024-07-25 17:09:33.844977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.826 [2024-07-25 17:09:33.844991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.826 [2024-07-25 17:09:33.844997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.826 [2024-07-25 17:09:33.845002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.826 [2024-07-25 17:09:33.845016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.826 qpair failed and we were unable to recover it. 
00:30:13.826 [2024-07-25 17:09:33.855008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.826 [2024-07-25 17:09:33.855092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.826 [2024-07-25 17:09:33.855105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.826 [2024-07-25 17:09:33.855111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.826 [2024-07-25 17:09:33.855115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.826 [2024-07-25 17:09:33.855131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.826 qpair failed and we were unable to recover it. 00:30:13.826 [2024-07-25 17:09:33.865031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.826 [2024-07-25 17:09:33.865125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.826 [2024-07-25 17:09:33.865139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.826 [2024-07-25 17:09:33.865145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.826 [2024-07-25 17:09:33.865149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.826 [2024-07-25 17:09:33.865161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.826 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.875095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.875175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.875188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.875193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.875198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.875213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 
00:30:13.827 [2024-07-25 17:09:33.885034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.885115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.885127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.885133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.885137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.885149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.895161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.895248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.895261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.895268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.895272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.895284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.905169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.905259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.905275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.905282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.905286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.905298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 
00:30:13.827 [2024-07-25 17:09:33.915215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.915296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.915308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.915314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.915319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.915331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.925245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.925374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.925387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.925393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.925397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.925409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.935269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.935359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.935372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.935378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.935383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.935395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 
00:30:13.827 [2024-07-25 17:09:33.945455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.945546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.945558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.945564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.945572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.945584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.955298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.955385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.955398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.955404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.955409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.955421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.965327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.965409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.965421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.965427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.965433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.965444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 
00:30:13.827 [2024-07-25 17:09:33.975339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.975431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.975443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.975450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.975455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.975466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.985383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.985478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.985491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.985496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.985501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.985513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 00:30:13.827 [2024-07-25 17:09:33.995354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.827 [2024-07-25 17:09:33.995441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.827 [2024-07-25 17:09:33.995454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.827 [2024-07-25 17:09:33.995460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.827 [2024-07-25 17:09:33.995465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.827 [2024-07-25 17:09:33.995477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.827 qpair failed and we were unable to recover it. 
00:30:13.827 [2024-07-25 17:09:34.005448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.005565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.005578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.005584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.005588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.005600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 00:30:13.828 [2024-07-25 17:09:34.015481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.015564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.015577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.015582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.015587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.015599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 00:30:13.828 [2024-07-25 17:09:34.025513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.025598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.025611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.025616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.025621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.025633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 
00:30:13.828 [2024-07-25 17:09:34.035409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.035653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.035667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.035672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.035680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.035692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 00:30:13.828 [2024-07-25 17:09:34.045579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.045660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.045673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.045679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.045683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.045696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 00:30:13.828 [2024-07-25 17:09:34.055598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.055683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.055696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.055702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.055706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.055718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 
00:30:13.828 [2024-07-25 17:09:34.065645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.065773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.065786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.065792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.065797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.065808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 00:30:13.828 [2024-07-25 17:09:34.075610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.075692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.075711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.075718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.075723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.075738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 00:30:13.828 [2024-07-25 17:09:34.085637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.085722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.085736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.085741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.085746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.085759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 
00:30:13.828 [2024-07-25 17:09:34.095785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.828 [2024-07-25 17:09:34.095888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.828 [2024-07-25 17:09:34.095901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.828 [2024-07-25 17:09:34.095907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.828 [2024-07-25 17:09:34.095912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:13.828 [2024-07-25 17:09:34.095924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.828 qpair failed and we were unable to recover it. 00:30:14.091 [2024-07-25 17:09:34.105768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.091 [2024-07-25 17:09:34.105860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.091 [2024-07-25 17:09:34.105879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.091 [2024-07-25 17:09:34.105886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.091 [2024-07-25 17:09:34.105891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.091 [2024-07-25 17:09:34.105906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.091 qpair failed and we were unable to recover it. 00:30:14.091 [2024-07-25 17:09:34.115763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.091 [2024-07-25 17:09:34.115852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.091 [2024-07-25 17:09:34.115871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.091 [2024-07-25 17:09:34.115877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.091 [2024-07-25 17:09:34.115882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.091 [2024-07-25 17:09:34.115898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.091 qpair failed and we were unable to recover it. 
00:30:14.091 [2024-07-25 17:09:34.125793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.091 [2024-07-25 17:09:34.125907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.091 [2024-07-25 17:09:34.125926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.091 [2024-07-25 17:09:34.125937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.091 [2024-07-25 17:09:34.125942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.091 [2024-07-25 17:09:34.125958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.091 qpair failed and we were unable to recover it. 00:30:14.091 [2024-07-25 17:09:34.135832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.091 [2024-07-25 17:09:34.135957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.091 [2024-07-25 17:09:34.135977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.091 [2024-07-25 17:09:34.135984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.091 [2024-07-25 17:09:34.135989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.091 [2024-07-25 17:09:34.136004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.091 qpair failed and we were unable to recover it. 00:30:14.091 [2024-07-25 17:09:34.145859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.091 [2024-07-25 17:09:34.145947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.091 [2024-07-25 17:09:34.145966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.091 [2024-07-25 17:09:34.145972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.091 [2024-07-25 17:09:34.145977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.091 [2024-07-25 17:09:34.145992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.091 qpair failed and we were unable to recover it. 
00:30:14.091 [2024-07-25 17:09:34.155877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.091 [2024-07-25 17:09:34.155959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.091 [2024-07-25 17:09:34.155973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.091 [2024-07-25 17:09:34.155978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.091 [2024-07-25 17:09:34.155983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.091 [2024-07-25 17:09:34.155995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.091 qpair failed and we were unable to recover it. 00:30:14.091 [2024-07-25 17:09:34.165897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.091 [2024-07-25 17:09:34.165978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.165990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.165996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.166001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.166013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.175922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.176006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.176019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.176025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.176029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.176042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 
00:30:14.092 [2024-07-25 17:09:34.185983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.186073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.186093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.186099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.186104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.186120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.196061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.196148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.196162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.196169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.196173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.196186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.206079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.206162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.206175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.206181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.206186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.206198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 
00:30:14.092 [2024-07-25 17:09:34.216096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.216195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.216231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.216238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.216242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.216254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.226055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.226139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.226152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.226157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.226162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.226174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.236091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.236194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.236211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.236217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.236221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.236233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 
00:30:14.092 [2024-07-25 17:09:34.246129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.246219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.246232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.246238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.246243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.246255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.256178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.256270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.256283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.256289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.256293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.256308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.266077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.266163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.266177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.266183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.266188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.266205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 
00:30:14.092 [2024-07-25 17:09:34.276224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.276311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.276324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.276330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.276335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.276348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.286264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.286386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.286399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.092 [2024-07-25 17:09:34.286405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.092 [2024-07-25 17:09:34.286410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.092 [2024-07-25 17:09:34.286422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.092 qpair failed and we were unable to recover it. 00:30:14.092 [2024-07-25 17:09:34.296279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.092 [2024-07-25 17:09:34.296364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.092 [2024-07-25 17:09:34.296376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.093 [2024-07-25 17:09:34.296382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.093 [2024-07-25 17:09:34.296387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.093 [2024-07-25 17:09:34.296399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.093 qpair failed and we were unable to recover it. 
00:30:14.093 [2024-07-25 17:09:34.306324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.093 [2024-07-25 17:09:34.306411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.093 [2024-07-25 17:09:34.306426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.093 [2024-07-25 17:09:34.306433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.093 [2024-07-25 17:09:34.306437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.093 [2024-07-25 17:09:34.306449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.093 qpair failed and we were unable to recover it. 00:30:14.093 [2024-07-25 17:09:34.316348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.093 [2024-07-25 17:09:34.316445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.093 [2024-07-25 17:09:34.316458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.093 [2024-07-25 17:09:34.316464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.093 [2024-07-25 17:09:34.316468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.093 [2024-07-25 17:09:34.316480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.093 qpair failed and we were unable to recover it. 00:30:14.093 [2024-07-25 17:09:34.326346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.093 [2024-07-25 17:09:34.326478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.093 [2024-07-25 17:09:34.326491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.093 [2024-07-25 17:09:34.326497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.093 [2024-07-25 17:09:34.326501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.093 [2024-07-25 17:09:34.326514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.093 qpair failed and we were unable to recover it. 
00:30:14.093 [2024-07-25 17:09:34.336328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.093 [2024-07-25 17:09:34.336418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.093 [2024-07-25 17:09:34.336431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.093 [2024-07-25 17:09:34.336436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.093 [2024-07-25 17:09:34.336440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.093 [2024-07-25 17:09:34.336452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.093 qpair failed and we were unable to recover it. 00:30:14.093 [2024-07-25 17:09:34.346434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.093 [2024-07-25 17:09:34.346516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.093 [2024-07-25 17:09:34.346528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.093 [2024-07-25 17:09:34.346534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.093 [2024-07-25 17:09:34.346538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.093 [2024-07-25 17:09:34.346554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.093 qpair failed and we were unable to recover it. 00:30:14.093 [2024-07-25 17:09:34.356453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.093 [2024-07-25 17:09:34.356543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.093 [2024-07-25 17:09:34.356556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.093 [2024-07-25 17:09:34.356562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.093 [2024-07-25 17:09:34.356566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.093 [2024-07-25 17:09:34.356578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.093 qpair failed and we were unable to recover it. 
00:30:14.356 [2024-07-25 17:09:34.366450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.366529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.366542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.366548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.366553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.366565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 00:30:14.356 [2024-07-25 17:09:34.376524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.376657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.376671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.376676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.376682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.376694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 00:30:14.356 [2024-07-25 17:09:34.386423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.386507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.386520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.386525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.386529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.386542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 
00:30:14.356 [2024-07-25 17:09:34.396425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.396516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.396529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.396535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.396540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.396552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 00:30:14.356 [2024-07-25 17:09:34.406604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.406728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.406741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.406747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.406752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.406765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 00:30:14.356 [2024-07-25 17:09:34.416627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.416732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.416745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.416751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.416756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.416767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 
00:30:14.356 [2024-07-25 17:09:34.426614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.426696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.426709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.426715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.426719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.426731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 00:30:14.356 [2024-07-25 17:09:34.436661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.436748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.436761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.436767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.436775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.436786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 00:30:14.356 [2024-07-25 17:09:34.446689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.446782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.446802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.446809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.356 [2024-07-25 17:09:34.446814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.356 [2024-07-25 17:09:34.446830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.356 qpair failed and we were unable to recover it. 
00:30:14.356 [2024-07-25 17:09:34.456735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.356 [2024-07-25 17:09:34.456821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.356 [2024-07-25 17:09:34.456835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.356 [2024-07-25 17:09:34.456841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.456846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.456858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.466714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.466804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.466824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.466831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.466836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.466851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.476836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.476951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.476971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.476978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.476982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.476998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 
00:30:14.357 [2024-07-25 17:09:34.486737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.486823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.486843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.486850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.486855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.486871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.496774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.496866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.496885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.496893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.496898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.496914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.506841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.506937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.506956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.506963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.506968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.506983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 
00:30:14.357 [2024-07-25 17:09:34.516906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.516993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.517013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.517020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.517025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.517041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.526748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.526829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.526844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.526857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.526862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.526874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.536921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.537008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.537021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.537027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.537032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.537045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 
00:30:14.357 [2024-07-25 17:09:34.546799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.546885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.546898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.546904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.546909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.546921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.556917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.557001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.557016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.557022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.557026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.557039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.566948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.567023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.567036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.567042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.567046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.567058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 
00:30:14.357 [2024-07-25 17:09:34.577098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.577184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.577198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.577211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.577216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.577229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.357 qpair failed and we were unable to recover it. 00:30:14.357 [2024-07-25 17:09:34.587174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.357 [2024-07-25 17:09:34.587267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.357 [2024-07-25 17:09:34.587281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.357 [2024-07-25 17:09:34.587287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.357 [2024-07-25 17:09:34.587292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.357 [2024-07-25 17:09:34.587304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.358 qpair failed and we were unable to recover it. 00:30:14.358 [2024-07-25 17:09:34.597101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.358 [2024-07-25 17:09:34.597183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.358 [2024-07-25 17:09:34.597196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.358 [2024-07-25 17:09:34.597204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.358 [2024-07-25 17:09:34.597209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.358 [2024-07-25 17:09:34.597221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.358 qpair failed and we were unable to recover it. 
00:30:14.358 [2024-07-25 17:09:34.607086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.358 [2024-07-25 17:09:34.607166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.358 [2024-07-25 17:09:34.607178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.358 [2024-07-25 17:09:34.607183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.358 [2024-07-25 17:09:34.607188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.358 [2024-07-25 17:09:34.607203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.358 qpair failed and we were unable to recover it. 00:30:14.358 [2024-07-25 17:09:34.617068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.358 [2024-07-25 17:09:34.617155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.358 [2024-07-25 17:09:34.617172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.358 [2024-07-25 17:09:34.617178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.358 [2024-07-25 17:09:34.617183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.358 [2024-07-25 17:09:34.617194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.358 qpair failed and we were unable to recover it. 00:30:14.358 [2024-07-25 17:09:34.627140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.358 [2024-07-25 17:09:34.627229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.358 [2024-07-25 17:09:34.627242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.358 [2024-07-25 17:09:34.627247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.358 [2024-07-25 17:09:34.627252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.358 [2024-07-25 17:09:34.627263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.358 qpair failed and we were unable to recover it. 
00:30:14.628 [2024-07-25 17:09:34.637192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.637281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.637294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.637301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.637305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.637317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 00:30:14.628 [2024-07-25 17:09:34.647173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.647252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.647265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.647271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.647275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.647289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 00:30:14.628 [2024-07-25 17:09:34.657277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.657362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.657375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.657381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.657385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.657397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 
00:30:14.628 [2024-07-25 17:09:34.667267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.667355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.667368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.667373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.667378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.667389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 00:30:14.628 [2024-07-25 17:09:34.677325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.677407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.677420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.677425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.677430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.677442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 00:30:14.628 [2024-07-25 17:09:34.687302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.687376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.687389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.687394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.687399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.687410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 
00:30:14.628 [2024-07-25 17:09:34.697338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.697461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.697474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.697480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.697484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.697496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 00:30:14.628 [2024-07-25 17:09:34.707228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.707312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.707328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.707334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.707339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.628 [2024-07-25 17:09:34.707351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.628 qpair failed and we were unable to recover it. 00:30:14.628 [2024-07-25 17:09:34.717428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.628 [2024-07-25 17:09:34.717534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.628 [2024-07-25 17:09:34.717548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.628 [2024-07-25 17:09:34.717554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.628 [2024-07-25 17:09:34.717558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.717569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-07-25 17:09:34.727335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.727447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.727459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.727465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.727469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.727481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.737540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.737628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.737641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.737648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.737652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.737664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.747482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.747594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.747607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.747613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.747617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.747632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-07-25 17:09:34.757516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.757601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.757614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.757620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.757625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.757636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.767517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.767594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.767606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.767612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.767617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.767629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.777585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.777671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.777684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.777690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.777695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.777707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-07-25 17:09:34.787526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.787606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.787619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.787625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.787629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.787641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.797604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.797689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.797704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.797710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.797715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.797726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.807616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.807694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.807706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.807712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.807716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.807729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-07-25 17:09:34.817568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.817667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.817681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.817687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.817691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.817703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.827725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.827852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.827865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.827871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.827875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.827887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-07-25 17:09:34.837916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.838007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.838026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.838033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.838041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.838058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-07-25 17:09:34.847703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.629 [2024-07-25 17:09:34.847804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.629 [2024-07-25 17:09:34.847824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.629 [2024-07-25 17:09:34.847831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.629 [2024-07-25 17:09:34.847836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.629 [2024-07-25 17:09:34.847851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-07-25 17:09:34.857807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.630 [2024-07-25 17:09:34.857933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.630 [2024-07-25 17:09:34.857953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.630 [2024-07-25 17:09:34.857959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.630 [2024-07-25 17:09:34.857964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.630 [2024-07-25 17:09:34.857980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-07-25 17:09:34.867789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.630 [2024-07-25 17:09:34.867881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.630 [2024-07-25 17:09:34.867901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.630 [2024-07-25 17:09:34.867908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.630 [2024-07-25 17:09:34.867912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.630 [2024-07-25 17:09:34.867928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-07-25 17:09:34.877853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.630 [2024-07-25 17:09:34.877939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.630 [2024-07-25 17:09:34.877959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.630 [2024-07-25 17:09:34.877966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.630 [2024-07-25 17:09:34.877971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.630 [2024-07-25 17:09:34.877986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-07-25 17:09:34.887806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.630 [2024-07-25 17:09:34.887923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.630 [2024-07-25 17:09:34.887943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.630 [2024-07-25 17:09:34.887949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.630 [2024-07-25 17:09:34.887954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.630 [2024-07-25 17:09:34.887970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-07-25 17:09:34.897890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.630 [2024-07-25 17:09:34.897974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.630 [2024-07-25 17:09:34.897988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.630 [2024-07-25 17:09:34.897994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.630 [2024-07-25 17:09:34.897999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.630 [2024-07-25 17:09:34.898012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.892 [2024-07-25 17:09:34.907911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.892 [2024-07-25 17:09:34.908005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.892 [2024-07-25 17:09:34.908019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.892 [2024-07-25 17:09:34.908025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.892 [2024-07-25 17:09:34.908030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.892 [2024-07-25 17:09:34.908042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.892 qpair failed and we were unable to recover it. 00:30:14.892 [2024-07-25 17:09:34.917866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.917956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.917975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.917982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.917987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.918003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:34.927945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.928025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.928039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.928049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.928054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.928067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 
00:30:14.893 [2024-07-25 17:09:34.938030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.938126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.938145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.938152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.938157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.938174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:34.948014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.948098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.948113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.948118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.948123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.948136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:34.958068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.958147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.958160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.958166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.958171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.958183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 
00:30:14.893 [2024-07-25 17:09:34.968038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.968113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.968126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.968132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.968136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.968149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:34.978105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.978190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.978207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.978214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.978218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.978231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:34.988097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.988179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.988192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.988198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.988208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.988220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 
00:30:14.893 [2024-07-25 17:09:34.998218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:34.998300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:34.998313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:34.998318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:34.998323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:34.998335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:35.008151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:35.008234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:35.008247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:35.008252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:35.008257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:35.008270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:35.018231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:35.018319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:35.018332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:35.018341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:35.018345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:35.018358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 
00:30:14.893 [2024-07-25 17:09:35.028245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:35.028330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:35.028343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:35.028349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:35.028353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:35.028365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:35.038257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:35.038361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.893 [2024-07-25 17:09:35.038375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.893 [2024-07-25 17:09:35.038381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.893 [2024-07-25 17:09:35.038386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.893 [2024-07-25 17:09:35.038401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.893 qpair failed and we were unable to recover it. 00:30:14.893 [2024-07-25 17:09:35.048282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.893 [2024-07-25 17:09:35.048355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.048368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.048374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.048379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.048391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 
00:30:14.894 [2024-07-25 17:09:35.058351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.058436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.058448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.058454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.058459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.058471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:14.894 [2024-07-25 17:09:35.068437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.068538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.068552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.068557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.068562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.068574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:14.894 [2024-07-25 17:09:35.078337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.078412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.078425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.078431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.078435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.078448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 
00:30:14.894 [2024-07-25 17:09:35.088360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.088476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.088489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.088495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.088500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.088512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:14.894 [2024-07-25 17:09:35.098456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.098540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.098552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.098558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.098564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.098576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:14.894 [2024-07-25 17:09:35.108458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.108542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.108557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.108563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.108568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.108580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 
00:30:14.894 [2024-07-25 17:09:35.118452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.118532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.118544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.118550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.118555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.118566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:14.894 [2024-07-25 17:09:35.128466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.128542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.128554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.128560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.128565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.128577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:14.894 [2024-07-25 17:09:35.138561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.138680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.138693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.138699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.138704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.138715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 
00:30:14.894 [2024-07-25 17:09:35.148517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.148614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.148627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.148633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.148637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.148652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:14.894 [2024-07-25 17:09:35.158636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.894 [2024-07-25 17:09:35.158713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.894 [2024-07-25 17:09:35.158726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.894 [2024-07-25 17:09:35.158731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.894 [2024-07-25 17:09:35.158736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:14.894 [2024-07-25 17:09:35.158748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.894 qpair failed and we were unable to recover it. 00:30:15.157 [2024-07-25 17:09:35.168580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.157 [2024-07-25 17:09:35.168656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.157 [2024-07-25 17:09:35.168669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.157 [2024-07-25 17:09:35.168674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.157 [2024-07-25 17:09:35.168679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.157 [2024-07-25 17:09:35.168692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.157 qpair failed and we were unable to recover it. 
00:30:15.157 [2024-07-25 17:09:35.178600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.157 [2024-07-25 17:09:35.178695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.157 [2024-07-25 17:09:35.178709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.157 [2024-07-25 17:09:35.178714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.157 [2024-07-25 17:09:35.178719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.157 [2024-07-25 17:09:35.178731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.157 qpair failed and we were unable to recover it. 00:30:15.157 [2024-07-25 17:09:35.188634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.157 [2024-07-25 17:09:35.188719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.157 [2024-07-25 17:09:35.188732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.157 [2024-07-25 17:09:35.188737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.157 [2024-07-25 17:09:35.188742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.157 [2024-07-25 17:09:35.188754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.157 qpair failed and we were unable to recover it. 00:30:15.157 [2024-07-25 17:09:35.198720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.157 [2024-07-25 17:09:35.198802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.157 [2024-07-25 17:09:35.198817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.157 [2024-07-25 17:09:35.198823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.157 [2024-07-25 17:09:35.198828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.157 [2024-07-25 17:09:35.198840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.157 qpair failed and we were unable to recover it. 
00:30:15.157 [2024-07-25 17:09:35.208691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.157 [2024-07-25 17:09:35.208817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.157 [2024-07-25 17:09:35.208836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.157 [2024-07-25 17:09:35.208843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.157 [2024-07-25 17:09:35.208848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.157 [2024-07-25 17:09:35.208863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.157 qpair failed and we were unable to recover it. 00:30:15.157 [2024-07-25 17:09:35.218781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.157 [2024-07-25 17:09:35.218891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.157 [2024-07-25 17:09:35.218910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.218917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.218922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.218938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.228781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.228866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.228885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.228893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.228897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.228913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 
00:30:15.158 [2024-07-25 17:09:35.238817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.238900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.238919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.238927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.238935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.238951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.248871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.248953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.248973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.248980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.248984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.249000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.258908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.258998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.259018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.259024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.259029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.259045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 
00:30:15.158 [2024-07-25 17:09:35.268904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.268993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.269012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.269019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.269024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.269040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.278891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.278967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.278981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.278986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.278991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.279005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.288920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.289020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.289040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.289047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.289052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.289067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 
00:30:15.158 [2024-07-25 17:09:35.299027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.299121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.299135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.299141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.299146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.299159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.308877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.308959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.308972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.308978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.308982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.308994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.319024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.319103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.319116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.319121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.319126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.319138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 
00:30:15.158 [2024-07-25 17:09:35.329016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.329096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.329109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.329119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.329124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.329135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.339088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.339174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.339187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.158 [2024-07-25 17:09:35.339193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.158 [2024-07-25 17:09:35.339198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.158 [2024-07-25 17:09:35.339213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.158 qpair failed and we were unable to recover it. 00:30:15.158 [2024-07-25 17:09:35.349090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.158 [2024-07-25 17:09:35.349229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.158 [2024-07-25 17:09:35.349243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.349249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.349253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.349265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 
00:30:15.159 [2024-07-25 17:09:35.359008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.159 [2024-07-25 17:09:35.359082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.159 [2024-07-25 17:09:35.359095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.359101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.359106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.359118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-25 17:09:35.369148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.159 [2024-07-25 17:09:35.369225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.159 [2024-07-25 17:09:35.369238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.369244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.369249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.369261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-25 17:09:35.379243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.159 [2024-07-25 17:09:35.379330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.159 [2024-07-25 17:09:35.379342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.379349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.379354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.379365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 
00:30:15.159 [2024-07-25 17:09:35.389139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.159 [2024-07-25 17:09:35.389222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.159 [2024-07-25 17:09:35.389235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.389241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.389246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.389258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-25 17:09:35.399242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.159 [2024-07-25 17:09:35.399323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.159 [2024-07-25 17:09:35.399336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.399341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.399346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.399358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-25 17:09:35.409227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.159 [2024-07-25 17:09:35.409306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.159 [2024-07-25 17:09:35.409320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.409327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.409333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.409346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 
00:30:15.159 [2024-07-25 17:09:35.419348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.159 [2024-07-25 17:09:35.419434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.159 [2024-07-25 17:09:35.419447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.159 [2024-07-25 17:09:35.419456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.159 [2024-07-25 17:09:35.419460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.159 [2024-07-25 17:09:35.419472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.422 [2024-07-25 17:09:35.429352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.422 [2024-07-25 17:09:35.429436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.422 [2024-07-25 17:09:35.429449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.422 [2024-07-25 17:09:35.429454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.422 [2024-07-25 17:09:35.429459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.422 [2024-07-25 17:09:35.429471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.422 qpair failed and we were unable to recover it. 00:30:15.422 [2024-07-25 17:09:35.439385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.422 [2024-07-25 17:09:35.439487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.422 [2024-07-25 17:09:35.439500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.422 [2024-07-25 17:09:35.439505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.422 [2024-07-25 17:09:35.439510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.422 [2024-07-25 17:09:35.439521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.422 qpair failed and we were unable to recover it. 
00:30:15.422 [2024-07-25 17:09:35.449398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.422 [2024-07-25 17:09:35.449479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.422 [2024-07-25 17:09:35.449492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.422 [2024-07-25 17:09:35.449497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.422 [2024-07-25 17:09:35.449501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.422 [2024-07-25 17:09:35.449513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.422 qpair failed and we were unable to recover it. 00:30:15.422 [2024-07-25 17:09:35.459395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.422 [2024-07-25 17:09:35.459489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.422 [2024-07-25 17:09:35.459502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.422 [2024-07-25 17:09:35.459508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.422 [2024-07-25 17:09:35.459512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.422 [2024-07-25 17:09:35.459524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.422 qpair failed and we were unable to recover it. 00:30:15.422 [2024-07-25 17:09:35.469319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.422 [2024-07-25 17:09:35.469405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.422 [2024-07-25 17:09:35.469417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.422 [2024-07-25 17:09:35.469423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.422 [2024-07-25 17:09:35.469427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.422 [2024-07-25 17:09:35.469440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.422 qpair failed and we were unable to recover it. 
00:30:15.422 [2024-07-25 17:09:35.479445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.422 [2024-07-25 17:09:35.479529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.422 [2024-07-25 17:09:35.479541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.422 [2024-07-25 17:09:35.479547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.422 [2024-07-25 17:09:35.479552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.422 [2024-07-25 17:09:35.479564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.422 qpair failed and we were unable to recover it. 00:30:15.422 [2024-07-25 17:09:35.489496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.422 [2024-07-25 17:09:35.489576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.422 [2024-07-25 17:09:35.489588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.422 [2024-07-25 17:09:35.489594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.489599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.489610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.499728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.499815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.499828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.499835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.499840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.499852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 
00:30:15.423 [2024-07-25 17:09:35.509570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.509660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.509683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.509690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.509695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.509711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.519575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.519651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.519664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.519670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.519675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.519689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.529462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.529567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.529581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.529586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.529591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.529603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 
00:30:15.423 [2024-07-25 17:09:35.539669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.539760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.539773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.539779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.539784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.539796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.549652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.549741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.549754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.549760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.549765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.549780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.559697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.559783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.559803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.559810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.559815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.559831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 
00:30:15.423 [2024-07-25 17:09:35.569669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.569752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.569771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.569778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.569783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.569800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.579772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.579862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.579882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.579889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.579894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.579910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.589765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.589855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.589875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.589882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.589887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.589902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 
00:30:15.423 [2024-07-25 17:09:35.599789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.599883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.599906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.599913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.599918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.599934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.609819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.609905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.609924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.423 [2024-07-25 17:09:35.609931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.423 [2024-07-25 17:09:35.609936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.423 [2024-07-25 17:09:35.609952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.423 qpair failed and we were unable to recover it. 00:30:15.423 [2024-07-25 17:09:35.619878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.423 [2024-07-25 17:09:35.620011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.423 [2024-07-25 17:09:35.620030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.620037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.620042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.620057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 
00:30:15.424 [2024-07-25 17:09:35.629862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.424 [2024-07-25 17:09:35.629949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.424 [2024-07-25 17:09:35.629968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.629975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.629980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.629996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 00:30:15.424 [2024-07-25 17:09:35.639916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.424 [2024-07-25 17:09:35.639997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.424 [2024-07-25 17:09:35.640016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.640024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.640032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.640049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 00:30:15.424 [2024-07-25 17:09:35.649903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.424 [2024-07-25 17:09:35.649986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.424 [2024-07-25 17:09:35.650005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.650012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.650017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.650032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 
00:30:15.424 [2024-07-25 17:09:35.659983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.424 [2024-07-25 17:09:35.660073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.424 [2024-07-25 17:09:35.660092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.660099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.660103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.660119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 00:30:15.424 [2024-07-25 17:09:35.669848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.424 [2024-07-25 17:09:35.669937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.424 [2024-07-25 17:09:35.669951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.669958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.669962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.669975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 00:30:15.424 [2024-07-25 17:09:35.679965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.424 [2024-07-25 17:09:35.680041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.424 [2024-07-25 17:09:35.680052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.680057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.680062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.680073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 
00:30:15.424 [2024-07-25 17:09:35.690051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.424 [2024-07-25 17:09:35.690135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.424 [2024-07-25 17:09:35.690154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.424 [2024-07-25 17:09:35.690162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.424 [2024-07-25 17:09:35.690167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.424 [2024-07-25 17:09:35.690182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.424 qpair failed and we were unable to recover it. 00:30:15.687 [2024-07-25 17:09:35.700115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.687 [2024-07-25 17:09:35.700206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.687 [2024-07-25 17:09:35.700220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.687 [2024-07-25 17:09:35.700226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.687 [2024-07-25 17:09:35.700230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.687 [2024-07-25 17:09:35.700243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.687 qpair failed and we were unable to recover it. 00:30:15.687 [2024-07-25 17:09:35.710050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.687 [2024-07-25 17:09:35.710137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.687 [2024-07-25 17:09:35.710150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.687 [2024-07-25 17:09:35.710155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.687 [2024-07-25 17:09:35.710161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.687 [2024-07-25 17:09:35.710173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.687 qpair failed and we were unable to recover it. 
00:30:15.687 [2024-07-25 17:09:35.720052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.687 [2024-07-25 17:09:35.720129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.687 [2024-07-25 17:09:35.720142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.687 [2024-07-25 17:09:35.720148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.687 [2024-07-25 17:09:35.720152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.687 [2024-07-25 17:09:35.720164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.687 qpair failed and we were unable to recover it. 00:30:15.687 [2024-07-25 17:09:35.730113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.687 [2024-07-25 17:09:35.730190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.687 [2024-07-25 17:09:35.730206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.687 [2024-07-25 17:09:35.730212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.687 [2024-07-25 17:09:35.730220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.687 [2024-07-25 17:09:35.730232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.687 qpair failed and we were unable to recover it. 00:30:15.687 [2024-07-25 17:09:35.740196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.687 [2024-07-25 17:09:35.740296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.687 [2024-07-25 17:09:35.740310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.687 [2024-07-25 17:09:35.740315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.687 [2024-07-25 17:09:35.740320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.687 [2024-07-25 17:09:35.740332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.687 qpair failed and we were unable to recover it. 
00:30:15.687 [2024-07-25 17:09:35.750108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.687 [2024-07-25 17:09:35.750225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.687 [2024-07-25 17:09:35.750239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.687 [2024-07-25 17:09:35.750245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.687 [2024-07-25 17:09:35.750251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.687 [2024-07-25 17:09:35.750265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.687 qpair failed and we were unable to recover it. 00:30:15.687 [2024-07-25 17:09:35.760227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.760303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.760316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.760322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.760326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.760338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.770215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.770292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.770306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.770311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.770316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.770328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 
00:30:15.688 [2024-07-25 17:09:35.780312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.780397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.780411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.780416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.780421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.780434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.790468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.790547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.790560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.790565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.790570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.790583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.800298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.800378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.800390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.800396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.800401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.800413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 
00:30:15.688 [2024-07-25 17:09:35.810211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.810295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.810308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.810314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.810319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.810331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.820401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.820492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.820505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.820514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.820519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.820531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.830420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.830504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.830516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.830522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.830527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.830539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 
00:30:15.688 [2024-07-25 17:09:35.840443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.840522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.840535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.840540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.840546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.840557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.850456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.850550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.850563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.850569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.850574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.850586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.860395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.860482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.860495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.860501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.860506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.860518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 
00:30:15.688 [2024-07-25 17:09:35.870541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.870625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.870638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.870644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.870649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.870661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.688 [2024-07-25 17:09:35.880509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.688 [2024-07-25 17:09:35.880604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.688 [2024-07-25 17:09:35.880617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.688 [2024-07-25 17:09:35.880623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.688 [2024-07-25 17:09:35.880627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.688 [2024-07-25 17:09:35.880639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.688 qpair failed and we were unable to recover it. 00:30:15.689 [2024-07-25 17:09:35.890565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-07-25 17:09:35.890638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-07-25 17:09:35.890651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-07-25 17:09:35.890656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-07-25 17:09:35.890661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.689 [2024-07-25 17:09:35.890673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.689 qpair failed and we were unable to recover it. 
00:30:15.689 [2024-07-25 17:09:35.900594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-07-25 17:09:35.900673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-07-25 17:09:35.900685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-07-25 17:09:35.900690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-07-25 17:09:35.900695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.689 [2024-07-25 17:09:35.900707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-07-25 17:09:35.910655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-07-25 17:09:35.910764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-07-25 17:09:35.910784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-07-25 17:09:35.910789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-07-25 17:09:35.910794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.689 [2024-07-25 17:09:35.910806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-07-25 17:09:35.920651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-07-25 17:09:35.920759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-07-25 17:09:35.920779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-07-25 17:09:35.920786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-07-25 17:09:35.920790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.689 [2024-07-25 17:09:35.920806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.689 qpair failed and we were unable to recover it. 
00:30:15.689 [2024-07-25 17:09:35.930623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-07-25 17:09:35.930698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-07-25 17:09:35.930712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-07-25 17:09:35.930718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-07-25 17:09:35.930723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.689 [2024-07-25 17:09:35.930735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-07-25 17:09:35.940729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-07-25 17:09:35.940815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-07-25 17:09:35.940829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-07-25 17:09:35.940834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-07-25 17:09:35.940839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.689 [2024-07-25 17:09:35.940851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-07-25 17:09:35.950791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-07-25 17:09:35.950902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-07-25 17:09:35.950915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-07-25 17:09:35.950921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-07-25 17:09:35.950925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.689 [2024-07-25 17:09:35.950940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.689 qpair failed and we were unable to recover it. 
00:30:15.952 [2024-07-25 17:09:35.960758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.952 [2024-07-25 17:09:35.960837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.952 [2024-07-25 17:09:35.960849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.952 [2024-07-25 17:09:35.960855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.952 [2024-07-25 17:09:35.960861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.952 [2024-07-25 17:09:35.960872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.952 qpair failed and we were unable to recover it. 00:30:15.952 [2024-07-25 17:09:35.970746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.952 [2024-07-25 17:09:35.970820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.952 [2024-07-25 17:09:35.970833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.952 [2024-07-25 17:09:35.970838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.952 [2024-07-25 17:09:35.970843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.952 [2024-07-25 17:09:35.970855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.952 qpair failed and we were unable to recover it. 00:30:15.952 [2024-07-25 17:09:35.980806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.952 [2024-07-25 17:09:35.980891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.952 [2024-07-25 17:09:35.980911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.952 [2024-07-25 17:09:35.980917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.952 [2024-07-25 17:09:35.980923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.952 [2024-07-25 17:09:35.980939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.952 qpair failed and we were unable to recover it. 
00:30:15.952 [2024-07-25 17:09:35.990844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.952 [2024-07-25 17:09:35.990922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.952 [2024-07-25 17:09:35.990936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.952 [2024-07-25 17:09:35.990942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.952 [2024-07-25 17:09:35.990947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:35.990960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.000868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.000964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.000982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.000989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.000994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.001007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.010906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.010984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.011004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.011011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.011017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.011032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 
00:30:15.953 [2024-07-25 17:09:36.020918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.020998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.021017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.021025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.021030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.021046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.030909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.030993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.031013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.031020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.031025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.031040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.040910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.040985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.040999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.041005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.041010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.041027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 
00:30:15.953 [2024-07-25 17:09:36.050965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.051083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.051103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.051109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.051114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.051130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.061018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.061095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.061109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.061115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.061121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.061133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.071042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.071118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.071131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.071137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.071142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.071154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 
00:30:15.953 [2024-07-25 17:09:36.080924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.081000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.081012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.081018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.081022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.081034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.091072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.091155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.091168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.091174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.091179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.091191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.101123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.101250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.101264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.101270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.101275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.101288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 
00:30:15.953 [2024-07-25 17:09:36.111114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.111191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.111208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.111214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.111218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.111230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.953 [2024-07-25 17:09:36.121183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.953 [2024-07-25 17:09:36.121266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.953 [2024-07-25 17:09:36.121279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.953 [2024-07-25 17:09:36.121285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.953 [2024-07-25 17:09:36.121290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.953 [2024-07-25 17:09:36.121302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.953 qpair failed and we were unable to recover it. 00:30:15.954 [2024-07-25 17:09:36.131060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.131142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.131154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.131160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.131168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.131180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 
00:30:15.954 [2024-07-25 17:09:36.141305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.141382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.141395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.141401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.141405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.141417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 00:30:15.954 [2024-07-25 17:09:36.151229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.151311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.151325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.151331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.151336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.151348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 00:30:15.954 [2024-07-25 17:09:36.161289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.161370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.161382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.161388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.161393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.161405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 
00:30:15.954 [2024-07-25 17:09:36.171280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.171353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.171365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.171371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.171375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.171387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 00:30:15.954 [2024-07-25 17:09:36.181394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.181505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.181519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.181524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.181529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.181541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 00:30:15.954 [2024-07-25 17:09:36.191416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.191498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.191512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.191518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.191524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.191536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 
00:30:15.954 [2024-07-25 17:09:36.201360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.201469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.201482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.201488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.201493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.201505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 00:30:15.954 [2024-07-25 17:09:36.211398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.211473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.211486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.211492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.211496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.211509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 00:30:15.954 [2024-07-25 17:09:36.221465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.954 [2024-07-25 17:09:36.221539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.954 [2024-07-25 17:09:36.221552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.954 [2024-07-25 17:09:36.221561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.954 [2024-07-25 17:09:36.221566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:15.954 [2024-07-25 17:09:36.221578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.954 qpair failed and we were unable to recover it. 
00:30:16.217 [2024-07-25 17:09:36.231498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.217 [2024-07-25 17:09:36.231576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.217 [2024-07-25 17:09:36.231589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.217 [2024-07-25 17:09:36.231594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.217 [2024-07-25 17:09:36.231599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.217 [2024-07-25 17:09:36.231611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.217 qpair failed and we were unable to recover it. 00:30:16.217 [2024-07-25 17:09:36.241457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.217 [2024-07-25 17:09:36.241536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.217 [2024-07-25 17:09:36.241549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.217 [2024-07-25 17:09:36.241554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.217 [2024-07-25 17:09:36.241559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.217 [2024-07-25 17:09:36.241570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.217 qpair failed and we were unable to recover it. 00:30:16.217 [2024-07-25 17:09:36.251502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.217 [2024-07-25 17:09:36.251578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.217 [2024-07-25 17:09:36.251591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.217 [2024-07-25 17:09:36.251596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.217 [2024-07-25 17:09:36.251603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.217 [2024-07-25 17:09:36.251614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.217 qpair failed and we were unable to recover it. 
00:30:16.217 [2024-07-25 17:09:36.261555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.217 [2024-07-25 17:09:36.261655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.217 [2024-07-25 17:09:36.261669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.217 [2024-07-25 17:09:36.261675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.217 [2024-07-25 17:09:36.261679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.217 [2024-07-25 17:09:36.261691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.217 qpair failed and we were unable to recover it. 00:30:16.217 [2024-07-25 17:09:36.271544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.217 [2024-07-25 17:09:36.271626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.217 [2024-07-25 17:09:36.271639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.217 [2024-07-25 17:09:36.271645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.217 [2024-07-25 17:09:36.271650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.217 [2024-07-25 17:09:36.271662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.217 qpair failed and we were unable to recover it. 00:30:16.217 [2024-07-25 17:09:36.281603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.217 [2024-07-25 17:09:36.281680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.217 [2024-07-25 17:09:36.281692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.217 [2024-07-25 17:09:36.281698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.217 [2024-07-25 17:09:36.281702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.281714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 
00:30:16.218 [2024-07-25 17:09:36.291642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.291722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.291741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.291748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.291753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.291769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.301636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.301715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.301729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.301735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.301740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.301753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.311689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.311807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.311824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.311829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.311834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.311846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 
00:30:16.218 [2024-07-25 17:09:36.321676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.321751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.321764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.321770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.321774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.321786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.331739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.331812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.331825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.331831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.331836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.331848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.341673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.341781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.341794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.341800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.341805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.341816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 
00:30:16.218 [2024-07-25 17:09:36.351801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.351911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.351931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.351938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.351943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.351962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.361789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.361906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.361926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.361932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.361937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.361953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.371817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.371899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.371918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.371925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.371930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.371946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 
00:30:16.218 [2024-07-25 17:09:36.381929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.382051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.382071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.382077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.382082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.382098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.391925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.392052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.392066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.392072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.392077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.392089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.218 [2024-07-25 17:09:36.401929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.402009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.402026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.402033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.402037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.402050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 
00:30:16.218 [2024-07-25 17:09:36.411962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.218 [2024-07-25 17:09:36.412038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.218 [2024-07-25 17:09:36.412051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.218 [2024-07-25 17:09:36.412056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.218 [2024-07-25 17:09:36.412061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.218 [2024-07-25 17:09:36.412074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.218 qpair failed and we were unable to recover it. 00:30:16.219 [2024-07-25 17:09:36.421912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.219 [2024-07-25 17:09:36.421988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.219 [2024-07-25 17:09:36.422001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.219 [2024-07-25 17:09:36.422006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.219 [2024-07-25 17:09:36.422011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.219 [2024-07-25 17:09:36.422023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.219 qpair failed and we were unable to recover it. 00:30:16.219 [2024-07-25 17:09:36.432038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.219 [2024-07-25 17:09:36.432119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.219 [2024-07-25 17:09:36.432138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.219 [2024-07-25 17:09:36.432146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.219 [2024-07-25 17:09:36.432151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.219 [2024-07-25 17:09:36.432167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.219 qpair failed and we were unable to recover it. 
00:30:16.219 [2024-07-25 17:09:36.442046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.219 [2024-07-25 17:09:36.442124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.219 [2024-07-25 17:09:36.442138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.219 [2024-07-25 17:09:36.442143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.219 [2024-07-25 17:09:36.442149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.219 [2024-07-25 17:09:36.442165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.219 qpair failed and we were unable to recover it. 00:30:16.219 [2024-07-25 17:09:36.452067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.219 [2024-07-25 17:09:36.452139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.219 [2024-07-25 17:09:36.452153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.219 [2024-07-25 17:09:36.452158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.219 [2024-07-25 17:09:36.452163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.219 [2024-07-25 17:09:36.452175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.219 qpair failed and we were unable to recover it. 00:30:16.219 [2024-07-25 17:09:36.462096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.219 [2024-07-25 17:09:36.462173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.219 [2024-07-25 17:09:36.462186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.219 [2024-07-25 17:09:36.462191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.219 [2024-07-25 17:09:36.462196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.219 [2024-07-25 17:09:36.462211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.219 qpair failed and we were unable to recover it. 
00:30:16.219 [2024-07-25 17:09:36.472115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.219 [2024-07-25 17:09:36.472195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.219 [2024-07-25 17:09:36.472211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.219 [2024-07-25 17:09:36.472218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.219 [2024-07-25 17:09:36.472222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.219 [2024-07-25 17:09:36.472234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.219 qpair failed and we were unable to recover it. 00:30:16.219 [2024-07-25 17:09:36.482189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.219 [2024-07-25 17:09:36.482309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.219 [2024-07-25 17:09:36.482322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.219 [2024-07-25 17:09:36.482328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.219 [2024-07-25 17:09:36.482332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.219 [2024-07-25 17:09:36.482344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.219 qpair failed and we were unable to recover it. 00:30:16.482 [2024-07-25 17:09:36.492135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.492212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.492229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.492235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.492239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.492251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 
00:30:16.482 [2024-07-25 17:09:36.502210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.502288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.502301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.502306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.502311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.502324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 00:30:16.482 [2024-07-25 17:09:36.512393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.512472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.512485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.512491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.512496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.512508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 00:30:16.482 [2024-07-25 17:09:36.522267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.522351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.522364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.522369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.522374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.522386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 
00:30:16.482 [2024-07-25 17:09:36.532270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.532351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.532363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.532369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.532376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.532388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 00:30:16.482 [2024-07-25 17:09:36.542286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.542409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.542423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.542429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.542434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.542446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 00:30:16.482 [2024-07-25 17:09:36.552308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.552390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.552403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.552409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.552414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.552426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 
00:30:16.482 [2024-07-25 17:09:36.562376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.562498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.562512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.562518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.482 [2024-07-25 17:09:36.562522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.482 [2024-07-25 17:09:36.562534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.482 qpair failed and we were unable to recover it. 00:30:16.482 [2024-07-25 17:09:36.572359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.482 [2024-07-25 17:09:36.572433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.482 [2024-07-25 17:09:36.572446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.482 [2024-07-25 17:09:36.572452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.572457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.572469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.582443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.582527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.582540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.582546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.582550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.582562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 
00:30:16.483 [2024-07-25 17:09:36.592426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.592511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.592524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.592529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.592535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.592546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.602471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.602549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.602562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.602567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.602573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.602585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.612499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.612578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.612591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.612597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.612602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.612614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 
00:30:16.483 [2024-07-25 17:09:36.622518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.622622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.622635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.622644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.622648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.622660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.632512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.632593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.632606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.632611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.632616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.632628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.642598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.642671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.642684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.642690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.642694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.642705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 
00:30:16.483 [2024-07-25 17:09:36.652592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.652671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.652684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.652689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.652695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.652707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.662657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.662757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.662771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.662776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.662781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.662792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.672679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.672764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.672777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.672782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.672788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.672800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 
00:30:16.483 [2024-07-25 17:09:36.682669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.682743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.682755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.682761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.682765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.682777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.692712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.692791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.692810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.692818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.692823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.483 [2024-07-25 17:09:36.692839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.483 qpair failed and we were unable to recover it. 00:30:16.483 [2024-07-25 17:09:36.702729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.483 [2024-07-25 17:09:36.702808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.483 [2024-07-25 17:09:36.702822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.483 [2024-07-25 17:09:36.702829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.483 [2024-07-25 17:09:36.702835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.484 [2024-07-25 17:09:36.702847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.484 qpair failed and we were unable to recover it. 
00:30:16.484 [2024-07-25 17:09:36.712751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.484 [2024-07-25 17:09:36.712839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.484 [2024-07-25 17:09:36.712858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.484 [2024-07-25 17:09:36.712868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.484 [2024-07-25 17:09:36.712873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.484 [2024-07-25 17:09:36.712889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.484 qpair failed and we were unable to recover it. 00:30:16.484 [2024-07-25 17:09:36.722776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.484 [2024-07-25 17:09:36.722859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.484 [2024-07-25 17:09:36.722879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.484 [2024-07-25 17:09:36.722886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.484 [2024-07-25 17:09:36.722892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.484 [2024-07-25 17:09:36.722907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.484 qpair failed and we were unable to recover it. 00:30:16.484 [2024-07-25 17:09:36.732742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.484 [2024-07-25 17:09:36.732820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.484 [2024-07-25 17:09:36.732839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.484 [2024-07-25 17:09:36.732846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.484 [2024-07-25 17:09:36.732850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.484 [2024-07-25 17:09:36.732866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.484 qpair failed and we were unable to recover it. 
00:30:16.484 [2024-07-25 17:09:36.742867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.484 [2024-07-25 17:09:36.742949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.484 [2024-07-25 17:09:36.742968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.484 [2024-07-25 17:09:36.742975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.484 [2024-07-25 17:09:36.742980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.484 [2024-07-25 17:09:36.742995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.484 qpair failed and we were unable to recover it. 00:30:16.484 [2024-07-25 17:09:36.752857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.484 [2024-07-25 17:09:36.752944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.484 [2024-07-25 17:09:36.752964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.484 [2024-07-25 17:09:36.752970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.484 [2024-07-25 17:09:36.752975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.484 [2024-07-25 17:09:36.752991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.484 qpair failed and we were unable to recover it. 00:30:16.747 [2024-07-25 17:09:36.762964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.747 [2024-07-25 17:09:36.763071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.747 [2024-07-25 17:09:36.763085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.747 [2024-07-25 17:09:36.763091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.747 [2024-07-25 17:09:36.763096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.747 [2024-07-25 17:09:36.763109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.747 qpair failed and we were unable to recover it. 
00:30:16.747 [2024-07-25 17:09:36.772934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.747 [2024-07-25 17:09:36.773018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.747 [2024-07-25 17:09:36.773037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.747 [2024-07-25 17:09:36.773044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.747 [2024-07-25 17:09:36.773049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.747 [2024-07-25 17:09:36.773064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-07-25 17:09:36.782940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.747 [2024-07-25 17:09:36.783019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.747 [2024-07-25 17:09:36.783033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.747 [2024-07-25 17:09:36.783038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.747 [2024-07-25 17:09:36.783043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.747 [2024-07-25 17:09:36.783055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-07-25 17:09:36.792990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.747 [2024-07-25 17:09:36.793066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.747 [2024-07-25 17:09:36.793079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.747 [2024-07-25 17:09:36.793085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.747 [2024-07-25 17:09:36.793090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.747 [2024-07-25 17:09:36.793102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.747 qpair failed and we were unable to recover it. 
00:30:16.747 [2024-07-25 17:09:36.803041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.747 [2024-07-25 17:09:36.803119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.747 [2024-07-25 17:09:36.803135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.747 [2024-07-25 17:09:36.803141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.747 [2024-07-25 17:09:36.803146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.747 [2024-07-25 17:09:36.803157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-07-25 17:09:36.813053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.747 [2024-07-25 17:09:36.813129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.747 [2024-07-25 17:09:36.813142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.747 [2024-07-25 17:09:36.813147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.747 [2024-07-25 17:09:36.813152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.747 [2024-07-25 17:09:36.813163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-07-25 17:09:36.823104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.747 [2024-07-25 17:09:36.823183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.747 [2024-07-25 17:09:36.823196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.747 [2024-07-25 17:09:36.823205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.747 [2024-07-25 17:09:36.823209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.823221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 
00:30:16.748 [2024-07-25 17:09:36.833103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.833184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.833196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.833206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.833211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.833222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.843085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.843160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.843172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.843178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.843182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.843197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.853103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.853177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.853190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.853195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.853204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.853216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 
00:30:16.748 [2024-07-25 17:09:36.863193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.863302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.863315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.863320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.863325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.863337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.873067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.873147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.873159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.873165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.873170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.873181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.883225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.883302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.883315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.883321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.883325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.883337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 
00:30:16.748 [2024-07-25 17:09:36.893220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.893303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.893318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.893324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.893329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.893341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.903251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.903332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.903345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.903350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.903356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.903368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.913284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.913400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.913414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.913420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.913425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.913436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 
00:30:16.748 [2024-07-25 17:09:36.923334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.923407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.923420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.923426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.923430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.923442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.933333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.933411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.933423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.933429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.933436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.933448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.943402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.943478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.943490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.943496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.943500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.943512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 
00:30:16.748 [2024-07-25 17:09:36.953388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.748 [2024-07-25 17:09:36.953469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.748 [2024-07-25 17:09:36.953482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.748 [2024-07-25 17:09:36.953487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.748 [2024-07-25 17:09:36.953492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.748 [2024-07-25 17:09:36.953503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-07-25 17:09:36.963441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.749 [2024-07-25 17:09:36.963512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.749 [2024-07-25 17:09:36.963524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.749 [2024-07-25 17:09:36.963530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.749 [2024-07-25 17:09:36.963534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.749 [2024-07-25 17:09:36.963546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.749 qpair failed and we were unable to recover it. 00:30:16.749 [2024-07-25 17:09:36.973444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.749 [2024-07-25 17:09:36.973518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.749 [2024-07-25 17:09:36.973530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.749 [2024-07-25 17:09:36.973536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.749 [2024-07-25 17:09:36.973541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.749 [2024-07-25 17:09:36.973552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.749 qpair failed and we were unable to recover it. 
00:30:16.749 [2024-07-25 17:09:36.983477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.749 [2024-07-25 17:09:36.983561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.749 [2024-07-25 17:09:36.983573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.749 [2024-07-25 17:09:36.983579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.749 [2024-07-25 17:09:36.983583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.749 [2024-07-25 17:09:36.983595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.749 qpair failed and we were unable to recover it. 00:30:16.749 [2024-07-25 17:09:36.993528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.749 [2024-07-25 17:09:36.993611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.749 [2024-07-25 17:09:36.993624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.749 [2024-07-25 17:09:36.993629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.749 [2024-07-25 17:09:36.993633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.749 [2024-07-25 17:09:36.993645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.749 qpair failed and we were unable to recover it. 00:30:16.749 [2024-07-25 17:09:37.003415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.749 [2024-07-25 17:09:37.003489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.749 [2024-07-25 17:09:37.003502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.749 [2024-07-25 17:09:37.003508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.749 [2024-07-25 17:09:37.003512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.749 [2024-07-25 17:09:37.003524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.749 qpair failed and we were unable to recover it. 
00:30:16.749 [2024-07-25 17:09:37.013555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.749 [2024-07-25 17:09:37.013630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.749 [2024-07-25 17:09:37.013642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.749 [2024-07-25 17:09:37.013648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.749 [2024-07-25 17:09:37.013653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:16.749 [2024-07-25 17:09:37.013665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.749 qpair failed and we were unable to recover it. 00:30:17.012 [2024-07-25 17:09:37.023596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.023676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.023688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.023698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.023703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.023715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 00:30:17.012 [2024-07-25 17:09:37.033641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.033726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.033739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.033746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.033750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.033762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 
00:30:17.012 [2024-07-25 17:09:37.043639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.043715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.043727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.043733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.043738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.043750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 00:30:17.012 [2024-07-25 17:09:37.053662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.053739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.053752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.053757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.053762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.053774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 00:30:17.012 [2024-07-25 17:09:37.063743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.063827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.063846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.063853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.063858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.063873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 
00:30:17.012 [2024-07-25 17:09:37.073728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.073813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.073832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.073839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.073844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.073859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 00:30:17.012 [2024-07-25 17:09:37.083779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.083858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.083877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.083883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.083888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.083904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 00:30:17.012 [2024-07-25 17:09:37.093764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.093841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.093857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.093863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.093868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.093881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 
00:30:17.012 [2024-07-25 17:09:37.103844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.103926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.103945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.012 [2024-07-25 17:09:37.103951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.012 [2024-07-25 17:09:37.103956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.012 [2024-07-25 17:09:37.103972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.012 qpair failed and we were unable to recover it. 00:30:17.012 [2024-07-25 17:09:37.113866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.012 [2024-07-25 17:09:37.113952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.012 [2024-07-25 17:09:37.113971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.113981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.113986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.114002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.123892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.123971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.123990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.123996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.124001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.124017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 
00:30:17.013 [2024-07-25 17:09:37.133918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.134006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.134025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.134032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.134037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.134052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.143941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.144027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.144046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.144053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.144058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.144073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.153932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.154014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.154028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.154033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.154038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.154051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 
00:30:17.013 [2024-07-25 17:09:37.163998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.164075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.164088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.164094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.164099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.164111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.173889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.173962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.173975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.173980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.173985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.173997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.184050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.184127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.184140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.184145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.184150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.184163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 
00:30:17.013 [2024-07-25 17:09:37.194080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.194158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.194170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.194176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.194180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.194193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.204017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.204099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.204114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.204120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.204125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.204136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.214095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.214175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.214188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.214195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.214199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.214215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 
00:30:17.013 [2024-07-25 17:09:37.224159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.224238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.224251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.224256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.224261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.224273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.234153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.234233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.234246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.234251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.234256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.013 [2024-07-25 17:09:37.234267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-07-25 17:09:37.244166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.013 [2024-07-25 17:09:37.244243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.013 [2024-07-25 17:09:37.244256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.013 [2024-07-25 17:09:37.244261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.013 [2024-07-25 17:09:37.244266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.014 [2024-07-25 17:09:37.244281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 [2024-07-25 17:09:37.254171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.014 [2024-07-25 17:09:37.254249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.014 [2024-07-25 17:09:37.254262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.014 [2024-07-25 17:09:37.254267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.014 [2024-07-25 17:09:37.254272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dcc000b90 00:30:17.014 [2024-07-25 17:09:37.254283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-07-25 17:09:37.264375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.014 [2024-07-25 17:09:37.264506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.014 [2024-07-25 17:09:37.264573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.014 [2024-07-25 17:09:37.264599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.014 [2024-07-25 17:09:37.264618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dd4000b90 00:30:17.014 [2024-07-25 17:09:37.264672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-07-25 17:09:37.274411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.014 [2024-07-25 17:09:37.274598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.014 [2024-07-25 17:09:37.274632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.014 [2024-07-25 17:09:37.274648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.014 [2024-07-25 17:09:37.274662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dd4000b90 00:30:17.014 [2024-07-25 17:09:37.274695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Write completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 Read completed with error (sct=0, sc=8) 00:30:17.014 starting I/O failed 00:30:17.014 [2024-07-25 17:09:37.275552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.276 [2024-07-25 17:09:37.284488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.276 [2024-07-25 17:09:37.284858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.276 [2024-07-25 17:09:37.284921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.276 [2024-07-25 17:09:37.284946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:30:17.276 [2024-07-25 17:09:37.284966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dc4000b90 00:30:17.276 [2024-07-25 17:09:37.285016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.276 qpair failed and we were unable to recover it. 00:30:17.276 [2024-07-25 17:09:37.294376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.276 [2024-07-25 17:09:37.294473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.276 [2024-07-25 17:09:37.294506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.276 [2024-07-25 17:09:37.294522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.276 [2024-07-25 17:09:37.294536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5dc4000b90 00:30:17.276 [2024-07-25 17:09:37.294567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.276 qpair failed and we were unable to recover it. 00:30:17.276 [2024-07-25 17:09:37.304404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.276 [2024-07-25 17:09:37.304512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.276 [2024-07-25 17:09:37.304538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.276 [2024-07-25 17:09:37.304547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.276 [2024-07-25 17:09:37.304554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10bd220 00:30:17.276 [2024-07-25 17:09:37.304575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.276 qpair failed and we were unable to recover it. 00:30:17.276 [2024-07-25 17:09:37.314431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.276 [2024-07-25 17:09:37.314534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.276 [2024-07-25 17:09:37.314553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.276 [2024-07-25 17:09:37.314565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.276 [2024-07-25 17:09:37.314573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10bd220 00:30:17.276 [2024-07-25 17:09:37.314589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.276 qpair failed and we were unable to recover it. 
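Right before the qpair id 4 failure above, every command still outstanding on that queue is completed with sct=0, sc=8 ("Read/Write completed with error ... starting I/O failed"), which here just means the commands were aborted as the queue was torn down, and only then does the CQ transport error surface. A quick way to tally those aborted completions by direction from the same saved console log (the file name is again only an example):

  # split the sct=0, sc=8 completions into reads vs. writes
  grep -oE '(Read|Write) completed with error \(sct=0, sc=8\)' target_disconnect.log | sort | uniq -c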
00:30:17.276 [2024-07-25 17:09:37.314751] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:17.276 A controller has encountered a failure and is being reset. 00:30:17.276 [2024-07-25 17:09:37.314862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10caf20 (9): Bad file descriptor 00:30:17.276 Controller properly reset. 00:30:17.276 Initializing NVMe Controllers 00:30:17.276 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:17.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:17.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:17.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:17.276 Initialization complete. Launching workers. 00:30:17.276 Starting thread on core 1 00:30:17.276 Starting thread on core 2 00:30:17.276 Starting thread on core 3 00:30:17.276 Starting thread on core 0 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:17.276 00:30:17.276 real 0m11.395s 00:30:17.276 user 0m19.911s 00:30:17.276 sys 0m4.461s 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.276 ************************************ 00:30:17.276 END TEST nvmf_target_disconnect_tc2 00:30:17.276 ************************************ 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:17.276 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:17.277 rmmod nvme_tcp 00:30:17.277 rmmod nvme_fabrics 00:30:17.277 rmmod nvme_keyring 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1619206 ']' 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1619206 00:30:17.277 17:09:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1619206 ']' 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1619206 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1619206 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1619206' 00:30:17.277 killing process with pid 1619206 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1619206 00:30:17.277 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1619206 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.538 17:09:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.455 17:09:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:19.455 00:30:19.455 real 0m21.176s 00:30:19.455 user 0m47.572s 00:30:19.455 sys 0m10.115s 00:30:19.455 17:09:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.455 17:09:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:19.455 ************************************ 00:30:19.455 END TEST nvmf_target_disconnect 00:30:19.455 ************************************ 00:30:19.716 17:09:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:19.716 00:30:19.716 real 6m16.264s 00:30:19.716 user 11m4.742s 00:30:19.716 sys 2m6.184s 00:30:19.716 17:09:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.716 17:09:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.716 ************************************ 00:30:19.716 END TEST nvmf_host 00:30:19.716 ************************************ 00:30:19.716 00:30:19.716 real 22m39.835s 00:30:19.716 user 47m15.032s 00:30:19.716 sys 7m14.761s 00:30:19.716 17:09:39 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.716 17:09:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
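The teardown above follows the harness's usual order: nvmftestfini first unloads nvme-tcp, nvme-fabrics and nvme-keyring, then killprocess confirms the pid is still alive and is an SPDK reactor rather than the sudo wrapper (the ps --no-headers -o comm= check) before killing and waiting on it, and finally the cvl_0_0_ns_spdk namespace is removed and cvl_0_1 is flushed. A simplified stand-alone sketch of that kill-and-wait idiom, not the actual autotest_common.sh implementation, with a hypothetical helper name:

  # kill a target process by pid and block until it is really gone
  kill_and_wait() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0                          # already gone
      [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1    # never kill the sudo wrapper
      kill "$pid"
      while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
  }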
00:30:19.716 ************************************ 00:30:19.716 END TEST nvmf_tcp 00:30:19.716 ************************************ 00:30:19.716 17:09:39 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:30:19.716 17:09:39 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.716 17:09:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:19.716 17:09:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.716 17:09:39 -- common/autotest_common.sh@10 -- # set +x 00:30:19.716 ************************************ 00:30:19.716 START TEST spdkcli_nvmf_tcp 00:30:19.717 ************************************ 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.717 * Looking for test storage... 00:30:19.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.717 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.978 17:09:39 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:19.979 17:09:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- 
# nvmf_tgt_pid=1621168 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1621168 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1621168 ']' 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:19.979 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.979 [2024-07-25 17:09:40.060386] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:30:19.979 [2024-07-25 17:09:40.060458] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621168 ] 00:30:19.979 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.979 [2024-07-25 17:09:40.128806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.979 [2024-07-25 17:09:40.205317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.979 [2024-07-25 17:09:40.205516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:20.924 17:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:20.924 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:20.924 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:20.924 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:20.924 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:20.924 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:20.924 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:20.924 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.924 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.924 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:20.924 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:20.924 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:20.924 ' 00:30:23.473 [2024-07-25 17:09:43.198843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.416 [2024-07-25 17:09:44.366758] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:26.333 [2024-07-25 17:09:46.505012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:28.248 [2024-07-25 17:09:48.346523] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:29.631 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:29.631 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:29.631 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:29.631 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:29.631 Executing command: 
['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:29.631 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:29.631 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:29.631 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.631 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.631 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:29.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:29.631 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:29.631 17:09:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:29.631 17:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:29.631 17:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.892 17:09:49 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:29.892 17:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:29.892 17:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.892 17:09:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:29.892 17:09:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.154 17:09:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:30.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:30.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:30.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:30.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:30.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:30.154 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:30.154 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:30.154 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:30.154 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:30.154 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:30.154 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:30.154 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:30.154 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:30.154 ' 00:30:35.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:35.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:35.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:35.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:35.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', 
'127.0.0.1:4262', False] 00:30:35.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:35.443 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:35.443 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:35.443 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:35.443 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:35.443 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:35.443 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:35.443 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:35.443 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1621168 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1621168 ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1621168 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1621168 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1621168' 00:30:35.443 killing process with pid 1621168 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1621168 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1621168 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1621168 ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1621168 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1621168 ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1621168 00:30:35.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1621168) - No such process 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1621168 is not found' 00:30:35.443 Process with pid 1621168 is not found 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:35.443 
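Both the create pass and the clear pass above are driven non-interactively through spdkcli_job.py: each ['command', 'expected match', True/False] triple is replayed against the spdkcli shell, and check_match then diffs the output of spdkcli.py ll /nvmf against spdkcli_nvmf.test.match. While the target is still up, essentially the same configuration can be spot-checked or rebuilt by hand with the JSON-RPC client; a rough sketch, not the test's own tooling, and the exact option spellings may differ between SPDK releases:

  ./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
  ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260
  ./scripts/spdkcli.py ll /nvmf     # the same view the check_match step compares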
00:30:35.443 real 0m15.618s 00:30:35.443 user 0m32.184s 00:30:35.443 sys 0m0.727s 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:35.443 17:09:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.443 ************************************ 00:30:35.443 END TEST spdkcli_nvmf_tcp 00:30:35.443 ************************************ 00:30:35.443 17:09:55 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:35.443 17:09:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:35.443 17:09:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:35.443 17:09:55 -- common/autotest_common.sh@10 -- # set +x 00:30:35.443 ************************************ 00:30:35.443 START TEST nvmf_identify_passthru 00:30:35.443 ************************************ 00:30:35.443 17:09:55 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:35.443 * Looking for test storage... 00:30:35.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.443 17:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.443 17:09:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.443 17:09:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.443 17:09:55 
nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.443 17:09:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.443 17:09:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.443 17:09:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.443 17:09:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:35.443 17:09:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.443 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:35.444 17:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.444 17:09:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.444 17:09:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.444 17:09:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.444 17:09:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.444 17:09:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.444 17:09:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.444 17:09:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:35.444 17:09:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.444 17:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.444 17:09:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:35.444 17:09:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:35.444 17:09:55 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:35.444 17:09:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.629 17:10:02 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:43.629 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:43.629 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:43.629 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.629 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:43.630 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
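The discovery pass above walks the two E810 (0x8086:0x159b, ice) functions and resolves each one to its kernel net device through sysfs, which is where cvl_0_0 and cvl_0_1 come from; the nvmf_tcp_init step that follows then puts cvl_0_0 with 10.0.0.2 (target side) into the private cvl_0_0_ns_spdk namespace and leaves cvl_0_1 with 10.0.0.1 (initiator side) in the root namespace. A condensed sketch of that lookup and of the topology being built next, assuming the same BDFs and interface names as in this run:

  # map a PCI function to its net device, the same sysfs path common.sh globs
  ls /sys/bus/pci/devices/0000:4b:00.0/net/
  # rebuild the target/initiator split by hand
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator side reaching the target address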
00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:43.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:30:43.630 00:30:43.630 --- 10.0.0.2 ping statistics --- 00:30:43.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.630 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:30:43.630 00:30:43.630 --- 10.0.0.1 ping statistics --- 00:30:43.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.630 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:43.630 17:10:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:43.630 17:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.630 17:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:43.630 17:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:43.630 17:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:43.630 17:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:43.630 17:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:43.630 17:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:43.630 17:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:43.630 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.630 
17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:43.630 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1627972 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:43.630 17:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1627972 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1627972 ']' 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:43.630 17:10:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.891 [2024-07-25 17:10:03.909849] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:30:43.891 [2024-07-25 17:10:03.909940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.891 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.891 [2024-07-25 17:10:03.980564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.891 [2024-07-25 17:10:04.045501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.891 [2024-07-25 17:10:04.045539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:43.891 [2024-07-25 17:10:04.045550] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.891 [2024-07-25 17:10:04.045557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.891 [2024-07-25 17:10:04.045562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.891 [2024-07-25 17:10:04.045700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.891 [2024-07-25 17:10:04.045814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.891 [2024-07-25 17:10:04.045970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.891 [2024-07-25 17:10:04.045971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.464 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:44.464 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:30:44.464 17:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:44.464 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.464 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.464 INFO: Log level set to 20 00:30:44.464 INFO: Requests: 00:30:44.464 { 00:30:44.464 "jsonrpc": "2.0", 00:30:44.464 "method": "nvmf_set_config", 00:30:44.464 "id": 1, 00:30:44.464 "params": { 00:30:44.464 "admin_cmd_passthru": { 00:30:44.464 "identify_ctrlr": true 00:30:44.464 } 00:30:44.464 } 00:30:44.464 } 00:30:44.464 00:30:44.464 INFO: response: 00:30:44.464 { 00:30:44.464 "jsonrpc": "2.0", 00:30:44.464 "id": 1, 00:30:44.464 "result": true 00:30:44.464 } 00:30:44.464 00:30:44.464 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.464 17:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:44.464 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.464 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.464 INFO: Setting log level to 20 00:30:44.464 INFO: Setting log level to 20 00:30:44.464 INFO: Log level set to 20 00:30:44.464 INFO: Log level set to 20 00:30:44.464 INFO: Requests: 00:30:44.464 { 00:30:44.464 "jsonrpc": "2.0", 00:30:44.464 "method": "framework_start_init", 00:30:44.464 "id": 1 00:30:44.464 } 00:30:44.464 00:30:44.464 INFO: Requests: 00:30:44.464 { 00:30:44.464 "jsonrpc": "2.0", 00:30:44.464 "method": "framework_start_init", 00:30:44.464 "id": 1 00:30:44.464 } 00:30:44.464 00:30:44.725 [2024-07-25 17:10:04.761624] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:44.725 INFO: response: 00:30:44.725 { 00:30:44.725 "jsonrpc": "2.0", 00:30:44.725 "id": 1, 00:30:44.725 "result": true 00:30:44.725 } 00:30:44.725 00:30:44.725 INFO: response: 00:30:44.725 { 00:30:44.725 "jsonrpc": "2.0", 00:30:44.725 "id": 1, 00:30:44.725 "result": true 00:30:44.725 } 00:30:44.725 00:30:44.725 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.725 17:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.725 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.725 17:10:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:44.726 INFO: Setting log level to 40 00:30:44.726 INFO: Setting log level to 40 00:30:44.726 INFO: Setting log level to 40 00:30:44.726 [2024-07-25 17:10:04.774950] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.726 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.726 17:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:44.726 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:44.726 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.726 17:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:44.726 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.726 17:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.986 Nvme0n1 00:30:44.986 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.986 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:44.986 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.986 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.986 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.987 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.987 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.987 [2024-07-25 17:10:05.157506] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.987 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.987 [ 00:30:44.987 { 00:30:44.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:44.987 "subtype": "Discovery", 00:30:44.987 "listen_addresses": [], 00:30:44.987 "allow_any_host": true, 00:30:44.987 "hosts": [] 00:30:44.987 }, 00:30:44.987 { 00:30:44.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.987 "subtype": "NVMe", 00:30:44.987 "listen_addresses": [ 00:30:44.987 { 00:30:44.987 "trtype": "TCP", 00:30:44.987 "adrfam": "IPv4", 00:30:44.987 "traddr": "10.0.0.2", 00:30:44.987 "trsvcid": "4420" 00:30:44.987 } 00:30:44.987 ], 00:30:44.987 "allow_any_host": true, 00:30:44.987 "hosts": [], 00:30:44.987 "serial_number": 
"SPDK00000000000001", 00:30:44.987 "model_number": "SPDK bdev Controller", 00:30:44.987 "max_namespaces": 1, 00:30:44.987 "min_cntlid": 1, 00:30:44.987 "max_cntlid": 65519, 00:30:44.987 "namespaces": [ 00:30:44.987 { 00:30:44.987 "nsid": 1, 00:30:44.987 "bdev_name": "Nvme0n1", 00:30:44.987 "name": "Nvme0n1", 00:30:44.987 "nguid": "36344730526054870025384500000044", 00:30:44.987 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:44.987 } 00:30:44.987 ] 00:30:44.987 } 00:30:44.987 ] 00:30:44.987 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.987 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:44.987 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:44.987 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:44.987 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.249 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:45.249 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:45.249 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:45.249 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:45.249 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.511 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:45.511 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:45.511 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:45.511 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.511 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:45.511 17:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:45.511 rmmod nvme_tcp 00:30:45.511 rmmod nvme_fabrics 00:30:45.511 rmmod nvme_keyring 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:45.511 17:10:05 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1627972 ']' 00:30:45.511 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1627972 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1627972 ']' 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1627972 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1627972 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1627972' 00:30:45.511 killing process with pid 1627972 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1627972 00:30:45.511 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1627972 00:30:45.773 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:45.773 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:45.773 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:45.773 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:45.773 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:45.773 17:10:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.773 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:45.773 17:10:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.321 17:10:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:48.321 00:30:48.321 real 0m12.502s 00:30:48.321 user 0m9.964s 00:30:48.321 sys 0m6.020s 00:30:48.321 17:10:08 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:48.321 17:10:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.321 ************************************ 00:30:48.321 END TEST nvmf_identify_passthru 00:30:48.321 ************************************ 00:30:48.321 17:10:08 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:48.321 17:10:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:48.321 17:10:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:48.321 17:10:08 -- common/autotest_common.sh@10 -- # set +x 00:30:48.321 ************************************ 00:30:48.321 START TEST nvmf_dif 00:30:48.321 ************************************ 00:30:48.321 17:10:08 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:48.321 * Looking for test storage... 
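That closes out the identify_passthru suite: nvmftestfini unloads the kernel initiator modules (the rmmod lines above), kills the nvmf_tgt process by pid, and nvmf_tcp_fini unwinds the namespace setup. A rough standalone equivalent using this run's names; _remove_spdk_ns is silenced in the log, so the netns deletion below is the assumed effect rather than a command copied from the trace:

# Teardown sketch matching the nvmftestfini sequence above.
modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring, as logged
kill "$nvmfpid"                  # $nvmfpid held the target pid (1627972 in this run)
ip netns del cvl_0_0_ns_spdk     # assumed effect of _remove_spdk_ns: frees cvl_0_0
ip -4 addr flush cvl_0_1         # drop the 10.0.0.1/24 initiator address

With the host back in a clean state, autotest.sh moves on to run_test nvmf_dif, which repeats the same device discovery and namespace bring-up before exercising DIF insert/strip.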
00:30:48.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.321 17:10:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.321 17:10:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.321 17:10:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.321 17:10:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.321 17:10:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.321 17:10:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.321 17:10:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.321 17:10:08 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:48.321 17:10:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:48.321 17:10:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:48.322 17:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:48.322 17:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:48.322 17:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:48.322 17:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:48.322 17:10:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.322 17:10:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:48.322 17:10:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:48.322 17:10:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:48.322 17:10:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:54.912 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:54.912 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:54.912 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:54.912 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.912 17:10:14 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.912 17:10:14 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:54.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:30:54.912 00:30:54.912 --- 10.0.0.2 ping statistics --- 00:30:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.913 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:30:54.913 17:10:14 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:54.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:30:54.913 00:30:54.913 --- 10.0.0.1 ping statistics --- 00:30:54.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.913 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:54.913 17:10:14 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.913 17:10:14 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:54.913 17:10:14 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:54.913 17:10:14 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:57.459 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:57.459 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:57.459 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:57.459 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:57.720 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:57.720 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:57.982 17:10:18 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:57.982 17:10:18 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:57.982 17:10:18 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:57.982 17:10:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1633811 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1633811 00:30:57.982 17:10:18 nvmf_dif -- 
common/autotest_common.sh@831 -- # '[' -z 1633811 ']' 00:30:57.982 17:10:18 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.982 17:10:18 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:57.982 17:10:18 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.982 17:10:18 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:57.982 17:10:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:57.982 17:10:18 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:57.982 [2024-07-25 17:10:18.215062] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:30:57.982 [2024-07-25 17:10:18.215122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.982 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.244 [2024-07-25 17:10:18.286473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.244 [2024-07-25 17:10:18.359067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.244 [2024-07-25 17:10:18.359105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.244 [2024-07-25 17:10:18.359112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.244 [2024-07-25 17:10:18.359119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.244 [2024-07-25 17:10:18.359124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
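Once this nvmf_tgt instance is listening on /var/tmp/spdk.sock, dif.sh drives it through rpc_cmd, which ultimately invokes SPDK's scripts/rpc.py. A sketch of equivalent standalone calls for the steps that follow in the trace; paths and flags mirror this run, and no netns exec is needed because the RPC endpoint is a UNIX socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip            # TCP transport with DIF insert/strip
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1     # 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420   # listener on the namespaced target IP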
00:30:58.244 [2024-07-25 17:10:18.359145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.816 17:10:18 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:58.816 17:10:18 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:30:58.816 17:10:18 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:58.816 17:10:18 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.816 17:10:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.816 17:10:19 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.816 17:10:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:58.816 17:10:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:58.816 17:10:19 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.816 17:10:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.816 [2024-07-25 17:10:19.017970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.816 17:10:19 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.816 17:10:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:58.816 17:10:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:58.816 17:10:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:58.816 17:10:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.816 ************************************ 00:30:58.816 START TEST fio_dif_1_default 00:30:58.816 ************************************ 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.816 bdev_null0 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.816 [2024-07-25 17:10:19.082277] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.816 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:58.817 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:59.077 { 00:30:59.077 "params": { 00:30:59.077 "name": "Nvme$subsystem", 00:30:59.077 "trtype": "$TEST_TRANSPORT", 00:30:59.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:59.077 "adrfam": "ipv4", 00:30:59.077 "trsvcid": "$NVMF_PORT", 00:30:59.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:59.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:59.077 "hdgst": ${hdgst:-false}, 00:30:59.077 "ddgst": ${ddgst:-false} 00:30:59.077 }, 00:30:59.077 "method": "bdev_nvme_attach_controller" 00:30:59.077 } 00:30:59.077 EOF 00:30:59.077 )") 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # grep libasan 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:59.077 "params": { 00:30:59.077 "name": "Nvme0", 00:30:59.077 "trtype": "tcp", 00:30:59.077 "traddr": "10.0.0.2", 00:30:59.077 "adrfam": "ipv4", 00:30:59.077 "trsvcid": "4420", 00:30:59.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.077 "hdgst": false, 00:30:59.077 "ddgst": false 00:30:59.077 }, 00:30:59.077 "method": "bdev_nvme_attach_controller" 00:30:59.077 }' 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:59.077 17:10:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:59.338 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:59.338 fio-3.35 00:30:59.338 Starting 1 thread 00:30:59.338 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.628 00:31:11.629 filename0: (groupid=0, jobs=1): err= 0: pid=1634337: Thu Jul 25 17:10:29 2024 00:31:11.629 read: IOPS=181, BW=726KiB/s (743kB/s)(7280KiB/10033msec) 00:31:11.629 slat (nsec): min=5373, max=32149, avg=6253.30, stdev=1434.64 00:31:11.629 clat (usec): min=1535, max=44080, avg=22031.67, stdev=20315.03 00:31:11.629 lat (usec): min=1540, max=44112, avg=22037.92, stdev=20315.03 00:31:11.629 clat percentiles (usec): 00:31:11.629 | 1.00th=[ 1582], 5.00th=[ 1631], 10.00th=[ 1647], 20.00th=[ 1680], 00:31:11.629 | 30.00th=[ 1680], 40.00th=[ 1713], 50.00th=[42206], 60.00th=[42206], 00:31:11.629 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:11.629 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:11.629 | 99.99th=[44303] 00:31:11.629 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=726.40, stdev=31.32, samples=20 00:31:11.629 iops : min= 176, max= 192, avg=181.60, stdev= 7.83, samples=20 
00:31:11.629 lat (msec) : 2=49.89%, 50=50.11% 00:31:11.629 cpu : usr=95.17%, sys=4.64%, ctx=15, majf=0, minf=232 00:31:11.629 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.629 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.629 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:11.629 00:31:11.629 Run status group 0 (all jobs): 00:31:11.629 READ: bw=726KiB/s (743kB/s), 726KiB/s-726KiB/s (743kB/s-743kB/s), io=7280KiB (7455kB), run=10033-10033msec 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 00:31:11.629 real 0m11.079s 00:31:11.629 user 0m23.776s 00:31:11.629 sys 0m0.748s 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 ************************************ 00:31:11.629 END TEST fio_dif_1_default 00:31:11.629 ************************************ 00:31:11.629 17:10:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:11.629 17:10:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:11.629 17:10:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 ************************************ 00:31:11.629 START TEST fio_dif_1_multi_subsystems 00:31:11.629 ************************************ 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 
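Before the multi-subsystem variant continues below, it is worth noting how the job above was described to fio: gen_fio_conf writes a small job file that fio receives on /dev/fd/61 alongside the JSON bdev config on /dev/fd/62. An assumption-laden reconstruction from the banner and runtime (only ioengine, rw, bs, iodepth and the roughly 10 s duration are visible in this log):

# Hypothetical reconstruction of the generated job; the real test streams it over a
# file descriptor instead of writing /tmp/fio_dif_1_default.job.
cat > /tmp/fio_dif_1_default.job <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
bs=4096
iodepth=4
rw=randread
[filename0]
filename=Nvme0n1
EOF

The thread=1, time_based and filename=Nvme0n1 lines are assumptions consistent with the spdk_bdev ioengine and the attached controller name, not values taken from the trace.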
00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 bdev_null0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 [2024-07-25 17:10:30.233500] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 bdev_null1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.629 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.629 { 00:31:11.629 "params": { 00:31:11.629 "name": "Nvme$subsystem", 00:31:11.629 "trtype": "$TEST_TRANSPORT", 00:31:11.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.629 "adrfam": "ipv4", 00:31:11.629 "trsvcid": "$NVMF_PORT", 00:31:11.629 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.629 "hdgst": ${hdgst:-false}, 00:31:11.629 "ddgst": ${ddgst:-false} 00:31:11.629 }, 00:31:11.629 "method": "bdev_nvme_attach_controller" 00:31:11.629 } 00:31:11.629 EOF 00:31:11.629 )") 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.630 { 00:31:11.630 "params": { 00:31:11.630 "name": "Nvme$subsystem", 00:31:11.630 "trtype": "$TEST_TRANSPORT", 00:31:11.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.630 "adrfam": "ipv4", 00:31:11.630 "trsvcid": "$NVMF_PORT", 00:31:11.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.630 "hdgst": ${hdgst:-false}, 00:31:11.630 "ddgst": ${ddgst:-false} 00:31:11.630 }, 00:31:11.630 "method": "bdev_nvme_attach_controller" 00:31:11.630 } 00:31:11.630 EOF 00:31:11.630 )") 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:11.630 "params": { 00:31:11.630 "name": "Nvme0", 00:31:11.630 "trtype": "tcp", 00:31:11.630 "traddr": "10.0.0.2", 00:31:11.630 "adrfam": "ipv4", 00:31:11.630 "trsvcid": "4420", 00:31:11.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:11.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:11.630 "hdgst": false, 00:31:11.630 "ddgst": false 00:31:11.630 }, 00:31:11.630 "method": "bdev_nvme_attach_controller" 00:31:11.630 },{ 00:31:11.630 "params": { 00:31:11.630 "name": "Nvme1", 00:31:11.630 "trtype": "tcp", 00:31:11.630 "traddr": "10.0.0.2", 00:31:11.630 "adrfam": "ipv4", 00:31:11.630 "trsvcid": "4420", 00:31:11.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:11.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:11.630 "hdgst": false, 00:31:11.630 "ddgst": false 00:31:11.630 }, 00:31:11.630 "method": "bdev_nvme_attach_controller" 00:31:11.630 }' 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:11.630 17:10:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.630 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:11.630 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:11.630 fio-3.35 00:31:11.630 Starting 2 threads 00:31:11.630 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.634 00:31:21.634 filename0: (groupid=0, jobs=1): err= 0: pid=1636647: Thu Jul 25 17:10:41 2024 00:31:21.634 read: IOPS=172, BW=689KiB/s (706kB/s)(6912KiB/10029msec) 00:31:21.634 slat (nsec): min=5383, max=32541, avg=6247.49, stdev=1406.87 00:31:21.634 clat (usec): min=761, max=45530, avg=23197.35, stdev=20413.74 00:31:21.634 lat (usec): min=767, max=45562, avg=23203.60, stdev=20413.70 00:31:21.634 clat percentiles (usec): 00:31:21.634 | 1.00th=[ 783], 5.00th=[ 807], 10.00th=[ 824], 20.00th=[ 840], 00:31:21.634 | 30.00th=[ 1004], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41157], 00:31:21.634 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:31:21.634 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:31:21.634 | 99.99th=[45351] 
00:31:21.634 bw ( KiB/s): min= 544, max= 768, per=64.43%, avg=689.60, stdev=73.03, samples=20 00:31:21.634 iops : min= 136, max= 192, avg=172.40, stdev=18.26, samples=20 00:31:21.634 lat (usec) : 1000=29.63% 00:31:21.634 lat (msec) : 2=15.97%, 50=54.40% 00:31:21.634 cpu : usr=96.82%, sys=2.97%, ctx=18, majf=0, minf=88 00:31:21.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.634 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:21.634 filename1: (groupid=0, jobs=1): err= 0: pid=1636648: Thu Jul 25 17:10:41 2024 00:31:21.634 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:31:21.634 slat (nsec): min=5395, max=33913, avg=6627.56, stdev=1774.86 00:31:21.634 clat (usec): min=41755, max=43832, avg=41988.20, stdev=130.74 00:31:21.634 lat (usec): min=41761, max=43866, avg=41994.82, stdev=131.63 00:31:21.634 clat percentiles (usec): 00:31:21.634 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:21.634 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:21.634 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:21.634 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:31:21.634 | 99.99th=[43779] 00:31:21.634 bw ( KiB/s): min= 352, max= 384, per=35.54%, avg=380.80, stdev= 9.85, samples=20 00:31:21.634 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:21.634 lat (msec) : 50=100.00% 00:31:21.634 cpu : usr=96.80%, sys=2.99%, ctx=10, majf=0, minf=159 00:31:21.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.634 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:21.634 00:31:21.634 Run status group 0 (all jobs): 00:31:21.634 READ: bw=1069KiB/s (1095kB/s), 381KiB/s-689KiB/s (390kB/s-706kB/s), io=10.5MiB (11.0MB), run=10029-10040msec 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 
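As a consistency check of the Run status group summary above: each job's bandwidth is its issued 4 KiB reads divided by its runtime, and the aggregate follows from the combined totals (all values copied from the log).

# Cross-check of the fio_dif_1_multi_subsystems summary above.
awk 'BEGIN { printf "filename0: %.0f KiB/s\n", 1728 * 4 / 10.029 }'          # reported 689 KiB/s
awk 'BEGIN { printf "filename1: %.0f KiB/s\n", 956 * 4 / 10.040 }'           # reported 381 KiB/s
awk 'BEGIN { printf "aggregate: %.0f KiB/s\n", (1728 + 956) * 4 / 10.040 }'  # reported 1069 KiB/s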
00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 00:31:21.634 real 0m11.439s 00:31:21.634 user 0m34.087s 00:31:21.634 sys 0m0.908s 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 ************************************ 00:31:21.634 END TEST fio_dif_1_multi_subsystems 00:31:21.634 ************************************ 00:31:21.634 17:10:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:21.634 17:10:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:21.634 17:10:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 ************************************ 00:31:21.634 START TEST fio_dif_rand_params 00:31:21.634 ************************************ 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 bdev_null0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.634 [2024-07-25 17:10:41.745767] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.634 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 
0 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:21.635 { 00:31:21.635 "params": { 00:31:21.635 "name": "Nvme$subsystem", 00:31:21.635 "trtype": "$TEST_TRANSPORT", 00:31:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.635 "adrfam": "ipv4", 00:31:21.635 "trsvcid": "$NVMF_PORT", 00:31:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.635 "hdgst": ${hdgst:-false}, 00:31:21.635 "ddgst": ${ddgst:-false} 00:31:21.635 }, 00:31:21.635 "method": "bdev_nvme_attach_controller" 00:31:21.635 } 00:31:21.635 EOF 00:31:21.635 )") 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
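gen_nvmf_target_json above assembles one bdev_nvme_attach_controller entry for the single subsystem, and the trace below prints its params block for Nvme0, preloads the SPDK fio plugin, and runs fio with the JSON config on /dev/fd/62 and the generated job file on /dev/fd/61. A rough standalone equivalent follows; the outer JSON wrapper keys and the /tmp file names are assumptions (only the params block is echoed in the log), while the params values, the plugin path and the fio path are taken from the trace.

# Sketch of the bdev config and fio invocation used below (wrapper keys and /tmp paths assumed).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat <<'EOF' > /tmp/bdev.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# /tmp/dif.fio stands in for the job file the harness passes on /dev/fd/61.
LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio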
00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:21.635 "params": { 00:31:21.635 "name": "Nvme0", 00:31:21.635 "trtype": "tcp", 00:31:21.635 "traddr": "10.0.0.2", 00:31:21.635 "adrfam": "ipv4", 00:31:21.635 "trsvcid": "4420", 00:31:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.635 "hdgst": false, 00:31:21.635 "ddgst": false 00:31:21.635 }, 00:31:21.635 "method": "bdev_nvme_attach_controller" 00:31:21.635 }' 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:21.635 17:10:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.206 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:22.206 ... 
00:31:22.206 fio-3.35 00:31:22.206 Starting 3 threads 00:31:22.206 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.493 00:31:27.493 filename0: (groupid=0, jobs=1): err= 0: pid=1639050: Thu Jul 25 17:10:47 2024 00:31:27.493 read: IOPS=111, BW=13.9MiB/s (14.6MB/s)(70.1MiB/5053msec) 00:31:27.493 slat (nsec): min=5406, max=34848, avg=8083.11, stdev=1563.28 00:31:27.493 clat (usec): min=7555, max=97873, avg=26929.20, stdev=21055.16 00:31:27.493 lat (usec): min=7563, max=97882, avg=26937.28, stdev=21055.23 00:31:27.493 clat percentiles (usec): 00:31:27.493 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10552], 00:31:27.493 | 30.00th=[11731], 40.00th=[13435], 50.00th=[14746], 60.00th=[16319], 00:31:27.493 | 70.00th=[51643], 80.00th=[54789], 90.00th=[56361], 95.00th=[57410], 00:31:27.493 | 99.00th=[59507], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:31:27.493 | 99.99th=[98042] 00:31:27.493 bw ( KiB/s): min=10752, max=22272, per=29.51%, avg=14284.80, stdev=3277.96, samples=10 00:31:27.493 iops : min= 84, max= 174, avg=111.60, stdev=25.61, samples=10 00:31:27.493 lat (msec) : 10=14.62%, 20=52.23%, 50=0.71%, 100=32.44% 00:31:27.493 cpu : usr=96.60%, sys=3.05%, ctx=9, majf=0, minf=81 00:31:27.493 IO depths : 1=4.8%, 2=95.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.493 issued rwts: total=561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.494 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.494 filename0: (groupid=0, jobs=1): err= 0: pid=1639051: Thu Jul 25 17:10:47 2024 00:31:27.494 read: IOPS=174, BW=21.8MiB/s (22.9MB/s)(110MiB/5048msec) 00:31:27.494 slat (nsec): min=5402, max=50755, avg=7740.65, stdev=2256.17 00:31:27.494 clat (usec): min=7116, max=92582, avg=17103.57, stdev=16490.69 00:31:27.494 lat (usec): min=7121, max=92589, avg=17111.31, stdev=16490.63 00:31:27.494 clat percentiles (usec): 00:31:27.494 | 1.00th=[ 7439], 5.00th=[ 7832], 10.00th=[ 7963], 20.00th=[ 8291], 00:31:27.494 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10290], 00:31:27.494 | 70.00th=[11207], 80.00th=[13173], 90.00th=[50594], 95.00th=[51643], 00:31:27.494 | 99.00th=[53740], 99.50th=[55313], 99.90th=[92799], 99.95th=[92799], 00:31:27.494 | 99.99th=[92799] 00:31:27.494 bw ( KiB/s): min=16896, max=33024, per=46.54%, avg=22528.00, stdev=5968.46, samples=10 00:31:27.494 iops : min= 132, max= 258, avg=176.00, stdev=46.63, samples=10 00:31:27.494 lat (msec) : 10=54.31%, 20=27.89%, 50=3.97%, 100=13.83% 00:31:27.494 cpu : usr=97.07%, sys=2.69%, ctx=8, majf=0, minf=136 00:31:27.494 IO depths : 1=6.2%, 2=93.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.494 issued rwts: total=882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.494 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.494 filename0: (groupid=0, jobs=1): err= 0: pid=1639052: Thu Jul 25 17:10:47 2024 00:31:27.494 read: IOPS=92, BW=11.6MiB/s (12.1MB/s)(58.5MiB/5050msec) 00:31:27.494 slat (nsec): min=5402, max=31335, avg=7703.84, stdev=1774.73 00:31:27.494 clat (usec): min=7653, max=95799, avg=32263.50, stdev=21201.98 00:31:27.494 lat (usec): min=7661, max=95806, avg=32271.20, stdev=21201.96 00:31:27.494 clat percentiles (usec): 
00:31:27.494 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10945], 20.00th=[12780], 00:31:27.494 | 30.00th=[14222], 40.00th=[15664], 50.00th=[17171], 60.00th=[52691], 00:31:27.494 | 70.00th=[53740], 80.00th=[55313], 90.00th=[56361], 95.00th=[57410], 00:31:27.494 | 99.00th=[61604], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:31:27.494 | 99.99th=[95945] 00:31:27.494 bw ( KiB/s): min= 8448, max=14592, per=24.59%, avg=11904.00, stdev=2149.49, samples=10 00:31:27.494 iops : min= 66, max= 114, avg=93.00, stdev=16.79, samples=10 00:31:27.494 lat (msec) : 10=4.70%, 20=50.43%, 50=0.85%, 100=44.02% 00:31:27.494 cpu : usr=96.06%, sys=3.39%, ctx=187, majf=0, minf=79 00:31:27.494 IO depths : 1=8.3%, 2=91.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.494 issued rwts: total=468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.494 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.494 00:31:27.494 Run status group 0 (all jobs): 00:31:27.494 READ: bw=47.3MiB/s (49.6MB/s), 11.6MiB/s-21.8MiB/s (12.1MB/s-22.9MB/s), io=239MiB (250MB), run=5048-5053msec 00:31:27.755 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:27.755 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:27.755 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:27.755 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:27.755 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:27.755 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:27.756 
17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 bdev_null0 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 [2024-07-25 17:10:47.913782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 bdev_null1 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
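This fio_dif_rand_params pass switched to NULL_DIF=2 with bs=4k, numjobs=8, iodepth=16 and two extra files, so three DIF type 2 null bdevs are exported on cnode0 through cnode2 here and in the entries that follow, and fio later reports 24 threads (8 jobs times 3 files). The job file gen_fio_conf writes to /dev/fd/61 is not echoed in the log; reconstructed from those parameters and the filename0..filename2 job lines printed when fio starts, it would look roughly like this sketch, in which the filename= values and the [global] layout are assumptions.

# Hypothetical job file for this pass; gen_fio_conf's real output is not shown in the
# log, and the namespace bdev names Nvme0n1..Nvme2n1 are assumed.
cat <<'EOF' > /tmp/dif_rand_params.fio
[global]
thread=1
bs=4k
rw=randread
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF

Runtime and size limits are left out because they are not visible in the trace for this pass (runtime= was set empty above).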
00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 bdev_null2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:27.756 17:10:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:27.756 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:27.757 { 00:31:27.757 "params": { 00:31:27.757 "name": "Nvme$subsystem", 00:31:27.757 "trtype": "$TEST_TRANSPORT", 00:31:27.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.757 "adrfam": "ipv4", 00:31:27.757 "trsvcid": "$NVMF_PORT", 00:31:27.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.757 "hdgst": ${hdgst:-false}, 00:31:27.757 "ddgst": ${ddgst:-false} 00:31:27.757 }, 00:31:27.757 "method": "bdev_nvme_attach_controller" 00:31:27.757 } 00:31:27.757 EOF 00:31:27.757 )") 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:27.757 { 00:31:27.757 "params": { 00:31:27.757 "name": "Nvme$subsystem", 00:31:27.757 "trtype": "$TEST_TRANSPORT", 00:31:27.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.757 "adrfam": "ipv4", 00:31:27.757 "trsvcid": "$NVMF_PORT", 00:31:27.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.757 "hdgst": ${hdgst:-false}, 00:31:27.757 "ddgst": ${ddgst:-false} 00:31:27.757 }, 00:31:27.757 "method": "bdev_nvme_attach_controller" 00:31:27.757 } 00:31:27.757 EOF 00:31:27.757 )") 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:27.757 17:10:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:27.757 { 00:31:27.757 "params": { 00:31:27.757 "name": "Nvme$subsystem", 00:31:27.757 "trtype": "$TEST_TRANSPORT", 00:31:27.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.757 "adrfam": "ipv4", 00:31:27.757 "trsvcid": "$NVMF_PORT", 00:31:27.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.757 "hdgst": ${hdgst:-false}, 00:31:27.757 "ddgst": ${ddgst:-false} 00:31:27.757 }, 00:31:27.757 "method": "bdev_nvme_attach_controller" 00:31:27.757 } 00:31:27.757 EOF 00:31:27.757 )") 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:27.757 "params": { 00:31:27.757 "name": "Nvme0", 00:31:27.757 "trtype": "tcp", 00:31:27.757 "traddr": "10.0.0.2", 00:31:27.757 "adrfam": "ipv4", 00:31:27.757 "trsvcid": "4420", 00:31:27.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.757 "hdgst": false, 00:31:27.757 "ddgst": false 00:31:27.757 }, 00:31:27.757 "method": "bdev_nvme_attach_controller" 00:31:27.757 },{ 00:31:27.757 "params": { 00:31:27.757 "name": "Nvme1", 00:31:27.757 "trtype": "tcp", 00:31:27.757 "traddr": "10.0.0.2", 00:31:27.757 "adrfam": "ipv4", 00:31:27.757 "trsvcid": "4420", 00:31:27.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.757 "hdgst": false, 00:31:27.757 "ddgst": false 00:31:27.757 }, 00:31:27.757 "method": "bdev_nvme_attach_controller" 00:31:27.757 },{ 00:31:27.757 "params": { 00:31:27.757 "name": "Nvme2", 00:31:27.757 "trtype": "tcp", 00:31:27.757 "traddr": "10.0.0.2", 00:31:27.757 "adrfam": "ipv4", 00:31:27.757 "trsvcid": "4420", 00:31:27.757 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:27.757 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:27.757 "hdgst": false, 00:31:27.757 "ddgst": false 00:31:27.757 }, 00:31:27.757 "method": "bdev_nvme_attach_controller" 00:31:27.757 }' 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:27.757 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:28.052 17:10:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:28.052 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:28.052 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:28.052 17:10:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.317 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:28.317 ... 00:31:28.317 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:28.317 ... 00:31:28.317 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:28.317 ... 00:31:28.317 fio-3.35 00:31:28.317 Starting 24 threads 00:31:28.317 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.547 00:31:40.547 filename0: (groupid=0, jobs=1): err= 0: pid=1640463: Thu Jul 25 17:10:59 2024 00:31:40.547 read: IOPS=518, BW=2076KiB/s (2126kB/s)(20.3MiB/10033msec) 00:31:40.547 slat (nsec): min=5539, max=81160, avg=8849.35, stdev=6075.81 00:31:40.547 clat (usec): min=5928, max=64433, avg=30738.21, stdev=5749.10 00:31:40.548 lat (usec): min=5938, max=64439, avg=30747.06, stdev=5749.40 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[14877], 5.00th=[20055], 10.00th=[22152], 20.00th=[27132], 00:31:40.548 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:40.548 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33817], 95.00th=[39584], 00:31:40.548 | 99.00th=[45876], 99.50th=[50594], 99.90th=[57410], 99.95th=[64226], 00:31:40.548 | 99.99th=[64226] 00:31:40.548 bw ( KiB/s): min= 1888, max= 2256, per=4.42%, avg=2075.90, stdev=101.91, samples=20 00:31:40.548 iops : min= 472, max= 564, avg=518.90, stdev=25.52, samples=20 00:31:40.548 lat (msec) : 10=0.27%, 20=4.26%, 50=94.89%, 100=0.58% 00:31:40.548 cpu : usr=99.26%, sys=0.44%, ctx=23, majf=0, minf=53 00:31:40.548 IO depths : 1=2.2%, 2=6.4%, 4=18.6%, 8=61.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:31:40.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 complete : 0=0.0%, 4=92.9%, 8=2.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 issued rwts: total=5207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.548 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.548 filename0: (groupid=0, jobs=1): err= 0: pid=1640464: Thu Jul 25 17:10:59 2024 00:31:40.548 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10004msec) 00:31:40.548 slat (usec): min=5, max=186, avg=18.17, stdev=13.35 00:31:40.548 clat (usec): min=23680, max=41823, avg=32108.31, stdev=1079.35 00:31:40.548 lat (usec): min=23692, max=41851, avg=32126.48, stdev=1079.07 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[30016], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:40.548 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.548 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:40.548 | 99.00th=[34341], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:31:40.548 | 99.99th=[41681] 00:31:40.548 bw ( KiB/s): min= 1916, max= 2052, per=4.22%, avg=1980.79, stdev=65.45, samples=19 00:31:40.548 iops : min= 479, max= 513, avg=495.00, stdev=16.47, samples=19 00:31:40.548 lat (msec) : 50=100.00% 00:31:40.548 
cpu : usr=95.07%, sys=2.59%, ctx=175, majf=0, minf=46 00:31:40.548 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:40.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.548 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.548 filename0: (groupid=0, jobs=1): err= 0: pid=1640465: Thu Jul 25 17:10:59 2024 00:31:40.548 read: IOPS=469, BW=1880KiB/s (1925kB/s)(18.4MiB/10021msec) 00:31:40.548 slat (nsec): min=5539, max=75893, avg=12982.77, stdev=10161.74 00:31:40.548 clat (usec): min=16931, max=57273, avg=33956.60, stdev=6054.27 00:31:40.548 lat (usec): min=16938, max=57279, avg=33969.58, stdev=6054.56 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[18744], 5.00th=[24511], 10.00th=[28967], 20.00th=[31327], 00:31:40.548 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:40.548 | 70.00th=[33162], 80.00th=[39060], 90.00th=[42730], 95.00th=[46400], 00:31:40.548 | 99.00th=[51119], 99.50th=[52167], 99.90th=[57410], 99.95th=[57410], 00:31:40.548 | 99.99th=[57410] 00:31:40.548 bw ( KiB/s): min= 1664, max= 2052, per=4.00%, avg=1878.15, stdev=82.68, samples=20 00:31:40.548 iops : min= 416, max= 513, avg=469.50, stdev=20.73, samples=20 00:31:40.548 lat (msec) : 20=1.57%, 50=96.98%, 100=1.44% 00:31:40.548 cpu : usr=96.57%, sys=1.88%, ctx=48, majf=0, minf=58 00:31:40.548 IO depths : 1=2.1%, 2=4.4%, 4=13.8%, 8=67.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:31:40.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 complete : 0=0.0%, 4=91.6%, 8=4.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 issued rwts: total=4709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.548 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.548 filename0: (groupid=0, jobs=1): err= 0: pid=1640466: Thu Jul 25 17:10:59 2024 00:31:40.548 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10004msec) 00:31:40.548 slat (nsec): min=5541, max=78626, avg=13917.15, stdev=10699.66 00:31:40.548 clat (usec): min=15999, max=55048, avg=33509.41, stdev=5636.11 00:31:40.548 lat (usec): min=16052, max=55073, avg=33523.33, stdev=5635.44 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[19268], 5.00th=[24249], 10.00th=[29754], 20.00th=[31327], 00:31:40.548 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:40.548 | 70.00th=[32900], 80.00th=[34341], 90.00th=[42206], 95.00th=[45876], 00:31:40.548 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54264], 99.95th=[54789], 00:31:40.548 | 99.99th=[54789] 00:31:40.548 bw ( KiB/s): min= 1824, max= 2024, per=4.06%, avg=1904.21, stdev=53.87, samples=19 00:31:40.548 iops : min= 456, max= 506, avg=476.05, stdev=13.47, samples=19 00:31:40.548 lat (msec) : 20=1.47%, 50=97.23%, 100=1.30% 00:31:40.548 cpu : usr=97.41%, sys=1.51%, ctx=78, majf=0, minf=69 00:31:40.548 IO depths : 1=1.0%, 2=2.0%, 4=11.9%, 8=71.7%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:40.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 complete : 0=0.0%, 4=91.2%, 8=5.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 issued rwts: total=4765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.548 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.548 filename0: (groupid=0, jobs=1): err= 0: pid=1640467: Thu Jul 25 17:10:59 2024 00:31:40.548 read: IOPS=464, 
BW=1859KiB/s (1904kB/s)(18.2MiB/10005msec) 00:31:40.548 slat (nsec): min=5378, max=97417, avg=15284.38, stdev=12117.90 00:31:40.548 clat (usec): min=6099, max=61725, avg=34326.52, stdev=6435.48 00:31:40.548 lat (usec): min=6105, max=61731, avg=34341.80, stdev=6435.24 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[19530], 5.00th=[24249], 10.00th=[29754], 20.00th=[31327], 00:31:40.548 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:40.548 | 70.00th=[33424], 80.00th=[40109], 90.00th=[44303], 95.00th=[46924], 00:31:40.548 | 99.00th=[51643], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:31:40.548 | 99.99th=[61604] 00:31:40.548 bw ( KiB/s): min= 1744, max= 2048, per=3.94%, avg=1851.16, stdev=67.81, samples=19 00:31:40.548 iops : min= 436, max= 512, avg=462.79, stdev=16.95, samples=19 00:31:40.548 lat (msec) : 10=0.09%, 20=1.10%, 50=96.62%, 100=2.19% 00:31:40.548 cpu : usr=98.49%, sys=0.98%, ctx=27, majf=0, minf=38 00:31:40.548 IO depths : 1=1.0%, 2=2.0%, 4=10.6%, 8=72.9%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:40.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 complete : 0=0.0%, 4=90.8%, 8=5.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 issued rwts: total=4650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.548 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.548 filename0: (groupid=0, jobs=1): err= 0: pid=1640468: Thu Jul 25 17:10:59 2024 00:31:40.548 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10016msec) 00:31:40.548 slat (nsec): min=5550, max=73208, avg=13455.00, stdev=10393.62 00:31:40.548 clat (usec): min=16541, max=56002, avg=32162.21, stdev=3094.01 00:31:40.548 lat (usec): min=16549, max=56041, avg=32175.66, stdev=3094.62 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[20841], 5.00th=[28967], 10.00th=[31065], 20.00th=[31589], 00:31:40.548 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.548 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:31:40.548 | 99.00th=[43254], 99.50th=[43779], 99.90th=[55837], 99.95th=[55837], 00:31:40.548 | 99.99th=[55837] 00:31:40.548 bw ( KiB/s): min= 1916, max= 2048, per=4.22%, avg=1981.95, stdev=62.83, samples=19 00:31:40.548 iops : min= 479, max= 512, avg=495.37, stdev=15.59, samples=19 00:31:40.548 lat (msec) : 20=0.68%, 50=99.07%, 100=0.24% 00:31:40.548 cpu : usr=99.09%, sys=0.55%, ctx=43, majf=0, minf=71 00:31:40.548 IO depths : 1=4.0%, 2=9.8%, 4=23.4%, 8=54.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:31:40.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.548 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.548 filename0: (groupid=0, jobs=1): err= 0: pid=1640470: Thu Jul 25 17:10:59 2024 00:31:40.548 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10004msec) 00:31:40.548 slat (nsec): min=5461, max=83172, avg=12544.57, stdev=9756.87 00:31:40.548 clat (usec): min=6533, max=55034, avg=32147.80, stdev=2175.07 00:31:40.548 lat (usec): min=6539, max=55053, avg=32160.34, stdev=2175.31 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[28967], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:31:40.548 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.548 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:40.548 | 
99.00th=[35390], 99.50th=[41681], 99.90th=[54789], 99.95th=[54789], 00:31:40.548 | 99.99th=[54789] 00:31:40.548 bw ( KiB/s): min= 1788, max= 2052, per=4.20%, avg=1973.89, stdev=78.43, samples=19 00:31:40.548 iops : min= 447, max= 513, avg=493.47, stdev=19.61, samples=19 00:31:40.548 lat (msec) : 10=0.04%, 20=0.36%, 50=99.27%, 100=0.32% 00:31:40.548 cpu : usr=99.35%, sys=0.38%, ctx=10, majf=0, minf=54 00:31:40.548 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:40.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.548 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.548 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.548 filename0: (groupid=0, jobs=1): err= 0: pid=1640471: Thu Jul 25 17:10:59 2024 00:31:40.548 read: IOPS=505, BW=2020KiB/s (2069kB/s)(19.7MiB/10003msec) 00:31:40.548 slat (nsec): min=5551, max=82769, avg=12121.01, stdev=9608.31 00:31:40.548 clat (usec): min=6094, max=82087, avg=31584.65, stdev=5360.28 00:31:40.548 lat (usec): min=6100, max=82108, avg=31596.77, stdev=5360.57 00:31:40.548 clat percentiles (usec): 00:31:40.548 | 1.00th=[17433], 5.00th=[22152], 10.00th=[25560], 20.00th=[30802], 00:31:40.548 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:40.548 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[38536], 00:31:40.548 | 99.00th=[46400], 99.50th=[50594], 99.90th=[82314], 99.95th=[82314], 00:31:40.549 | 99.99th=[82314] 00:31:40.549 bw ( KiB/s): min= 1712, max= 2292, per=4.29%, avg=2015.37, stdev=131.37, samples=19 00:31:40.549 iops : min= 428, max= 573, avg=503.84, stdev=32.84, samples=19 00:31:40.549 lat (msec) : 10=0.20%, 20=2.97%, 50=96.20%, 100=0.63% 00:31:40.549 cpu : usr=99.25%, sys=0.45%, ctx=63, majf=0, minf=84 00:31:40.549 IO depths : 1=3.5%, 2=7.5%, 4=17.5%, 8=61.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=92.2%, 8=3.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=5052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.549 filename1: (groupid=0, jobs=1): err= 0: pid=1640472: Thu Jul 25 17:10:59 2024 00:31:40.549 read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.4MiB/10004msec) 00:31:40.549 slat (nsec): min=5491, max=79797, avg=17274.21, stdev=12776.32 00:31:40.549 clat (usec): min=7727, max=50308, avg=32093.71, stdev=2277.97 00:31:40.549 lat (usec): min=7733, max=50324, avg=32110.98, stdev=2278.57 00:31:40.549 clat percentiles (usec): 00:31:40.549 | 1.00th=[22676], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:31:40.549 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.549 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:40.549 | 99.00th=[41157], 99.50th=[42206], 99.90th=[50070], 99.95th=[50070], 00:31:40.549 | 99.99th=[50070] 00:31:40.549 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1982.05, stdev=75.55, samples=19 00:31:40.549 iops : min= 448, max= 512, avg=495.47, stdev=18.85, samples=19 00:31:40.549 lat (msec) : 10=0.04%, 20=0.32%, 50=99.32%, 100=0.32% 00:31:40.549 cpu : usr=99.24%, sys=0.47%, ctx=32, majf=0, minf=60 00:31:40.549 IO depths : 1=3.9%, 2=9.6%, 4=23.2%, 8=54.3%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.549 filename1: (groupid=0, jobs=1): err= 0: pid=1640473: Thu Jul 25 17:10:59 2024 00:31:40.549 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.5MiB/10022msec) 00:31:40.549 slat (nsec): min=5636, max=73056, avg=13708.77, stdev=9729.61 00:31:40.549 clat (usec): min=15434, max=55157, avg=32049.92, stdev=2859.50 00:31:40.549 lat (usec): min=15440, max=55165, avg=32063.63, stdev=2860.14 00:31:40.549 clat percentiles (usec): 00:31:40.549 | 1.00th=[19268], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:31:40.549 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:40.549 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:40.549 | 99.00th=[41681], 99.50th=[45351], 99.90th=[53740], 99.95th=[53740], 00:31:40.549 | 99.99th=[55313] 00:31:40.549 bw ( KiB/s): min= 1916, max= 2064, per=4.23%, avg=1986.40, stdev=65.15, samples=20 00:31:40.549 iops : min= 479, max= 516, avg=496.45, stdev=16.30, samples=20 00:31:40.549 lat (msec) : 20=1.10%, 50=98.49%, 100=0.40% 00:31:40.549 cpu : usr=95.89%, sys=1.91%, ctx=37, majf=0, minf=58 00:31:40.549 IO depths : 1=5.1%, 2=11.1%, 4=24.1%, 8=52.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.549 filename1: (groupid=0, jobs=1): err= 0: pid=1640474: Thu Jul 25 17:10:59 2024 00:31:40.549 read: IOPS=437, BW=1750KiB/s (1792kB/s)(17.1MiB/10003msec) 00:31:40.549 slat (nsec): min=5550, max=79547, avg=14760.07, stdev=11368.39 00:31:40.549 clat (usec): min=11396, max=77667, avg=36478.34, stdev=6945.37 00:31:40.549 lat (usec): min=11402, max=77688, avg=36493.10, stdev=6944.87 00:31:40.549 clat percentiles (usec): 00:31:40.549 | 1.00th=[20579], 5.00th=[27132], 10.00th=[30802], 20.00th=[31851], 00:31:40.549 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[38011], 00:31:40.549 | 70.00th=[41157], 80.00th=[43779], 90.00th=[46400], 95.00th=[47449], 00:31:40.549 | 99.00th=[51119], 99.50th=[52691], 99.90th=[77071], 99.95th=[77071], 00:31:40.549 | 99.99th=[78119] 00:31:40.549 bw ( KiB/s): min= 1368, max= 1976, per=3.70%, avg=1737.79, stdev=209.18, samples=19 00:31:40.549 iops : min= 342, max= 494, avg=434.42, stdev=52.33, samples=19 00:31:40.549 lat (msec) : 20=0.89%, 50=97.14%, 100=1.96% 00:31:40.549 cpu : usr=95.18%, sys=2.51%, ctx=33, majf=0, minf=54 00:31:40.549 IO depths : 1=0.3%, 2=0.7%, 4=10.7%, 8=74.0%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=91.4%, 8=5.1%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=4377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.549 filename1: (groupid=0, jobs=1): err= 0: pid=1640475: Thu Jul 25 17:10:59 2024 00:31:40.549 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.4MiB/10022msec) 00:31:40.549 slat (usec): min=5, max=192, avg=12.87, stdev=10.07 00:31:40.549 clat (usec): min=18378, max=43063, avg=32103.19, stdev=1849.29 00:31:40.549 lat (usec): 
min=18411, max=43073, avg=32116.05, stdev=1849.35 00:31:40.549 clat percentiles (usec): 00:31:40.549 | 1.00th=[22414], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:31:40.549 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.549 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:40.549 | 99.00th=[39060], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:31:40.549 | 99.99th=[43254] 00:31:40.549 bw ( KiB/s): min= 1916, max= 2048, per=4.24%, avg=1988.45, stdev=64.46, samples=20 00:31:40.549 iops : min= 479, max= 512, avg=497.00, stdev=16.02, samples=20 00:31:40.549 lat (msec) : 20=0.32%, 50=99.68% 00:31:40.549 cpu : usr=97.02%, sys=1.50%, ctx=97, majf=0, minf=46 00:31:40.549 IO depths : 1=5.0%, 2=11.2%, 4=24.8%, 8=51.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.549 filename1: (groupid=0, jobs=1): err= 0: pid=1640476: Thu Jul 25 17:10:59 2024 00:31:40.549 read: IOPS=498, BW=1993KiB/s (2040kB/s)(19.5MiB/10021msec) 00:31:40.549 slat (nsec): min=5576, max=71975, avg=14861.27, stdev=9897.67 00:31:40.549 clat (usec): min=9072, max=46359, avg=31987.85, stdev=1921.65 00:31:40.549 lat (usec): min=9088, max=46368, avg=32002.71, stdev=1921.69 00:31:40.549 clat percentiles (usec): 00:31:40.549 | 1.00th=[20841], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:40.549 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.549 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:40.549 | 99.00th=[34341], 99.50th=[35390], 99.90th=[41157], 99.95th=[41157], 00:31:40.549 | 99.99th=[46400] 00:31:40.549 bw ( KiB/s): min= 1920, max= 2048, per=4.24%, avg=1989.65, stdev=64.66, samples=20 00:31:40.549 iops : min= 480, max= 512, avg=497.30, stdev=16.07, samples=20 00:31:40.549 lat (msec) : 10=0.14%, 20=0.72%, 50=99.14% 00:31:40.549 cpu : usr=98.62%, sys=0.95%, ctx=11, majf=0, minf=74 00:31:40.549 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.549 filename1: (groupid=0, jobs=1): err= 0: pid=1640477: Thu Jul 25 17:10:59 2024 00:31:40.549 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10002msec) 00:31:40.549 slat (nsec): min=5585, max=90014, avg=21753.54, stdev=14598.33 00:31:40.549 clat (usec): min=13897, max=49837, avg=32048.40, stdev=1802.23 00:31:40.549 lat (usec): min=13903, max=49854, avg=32070.15, stdev=1802.38 00:31:40.549 clat percentiles (usec): 00:31:40.549 | 1.00th=[29492], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:40.549 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:40.549 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:40.549 | 99.00th=[34341], 99.50th=[41681], 99.90th=[49546], 99.95th=[50070], 00:31:40.549 | 99.99th=[50070] 00:31:40.549 bw ( KiB/s): min= 1795, max= 2048, per=4.22%, avg=1980.53, stdev=77.68, samples=19 00:31:40.549 iops : min= 448, max= 512, 
avg=495.05, stdev=19.49, samples=19 00:31:40.549 lat (msec) : 20=0.32%, 50=99.68% 00:31:40.549 cpu : usr=99.10%, sys=0.53%, ctx=57, majf=0, minf=35 00:31:40.549 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.549 filename1: (groupid=0, jobs=1): err= 0: pid=1640478: Thu Jul 25 17:10:59 2024 00:31:40.549 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10003msec) 00:31:40.549 slat (nsec): min=5578, max=71389, avg=16270.97, stdev=11688.23 00:31:40.549 clat (usec): min=13607, max=53872, avg=32121.09, stdev=1422.54 00:31:40.549 lat (usec): min=13617, max=53883, avg=32137.36, stdev=1422.33 00:31:40.549 clat percentiles (usec): 00:31:40.549 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:40.549 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:40.549 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:31:40.549 | 99.00th=[35390], 99.50th=[38536], 99.90th=[41681], 99.95th=[51119], 00:31:40.549 | 99.99th=[53740] 00:31:40.549 bw ( KiB/s): min= 1916, max= 2048, per=4.22%, avg=1979.68, stdev=62.42, samples=19 00:31:40.549 iops : min= 479, max= 512, avg=494.84, stdev=15.52, samples=19 00:31:40.549 lat (msec) : 20=0.40%, 50=99.52%, 100=0.08% 00:31:40.549 cpu : usr=99.20%, sys=0.42%, ctx=75, majf=0, minf=69 00:31:40.549 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:40.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.549 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 filename1: (groupid=0, jobs=1): err= 0: pid=1640480: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=501, BW=2004KiB/s (2052kB/s)(19.6MiB/10022msec) 00:31:40.550 slat (nsec): min=5538, max=64293, avg=11663.91, stdev=8598.20 00:31:40.550 clat (usec): min=5471, max=54163, avg=31837.49, stdev=2825.30 00:31:40.550 lat (usec): min=5483, max=54202, avg=31849.16, stdev=2823.93 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[17695], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:40.550 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.550 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:31:40.550 | 99.00th=[34341], 99.50th=[35914], 99.90th=[42206], 99.95th=[42206], 00:31:40.550 | 99.99th=[54264] 00:31:40.550 bw ( KiB/s): min= 1920, max= 2304, per=4.26%, avg=2001.90, stdev=94.86, samples=20 00:31:40.550 iops : min= 480, max= 576, avg=500.40, stdev=23.69, samples=20 00:31:40.550 lat (msec) : 10=0.64%, 20=0.96%, 50=98.37%, 100=0.04% 00:31:40.550 cpu : usr=99.10%, sys=0.57%, ctx=110, majf=0, minf=64 00:31:40.550 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 
filename2: (groupid=0, jobs=1): err= 0: pid=1640481: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.3MiB/10012msec) 00:31:40.550 slat (usec): min=5, max=104, avg=15.76, stdev=13.38 00:31:40.550 clat (usec): min=14714, max=61384, avg=34176.95, stdev=6252.26 00:31:40.550 lat (usec): min=14734, max=61390, avg=34192.71, stdev=6251.49 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[20841], 5.00th=[25035], 10.00th=[28967], 20.00th=[31327], 00:31:40.550 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:40.550 | 70.00th=[33424], 80.00th=[39060], 90.00th=[44303], 95.00th=[47449], 00:31:40.550 | 99.00th=[51119], 99.50th=[53740], 99.90th=[57410], 99.95th=[61604], 00:31:40.550 | 99.99th=[61604] 00:31:40.550 bw ( KiB/s): min= 1760, max= 1952, per=3.97%, avg=1865.47, stdev=56.10, samples=19 00:31:40.550 iops : min= 440, max= 488, avg=466.37, stdev=14.02, samples=19 00:31:40.550 lat (msec) : 20=0.62%, 50=97.37%, 100=2.01% 00:31:40.550 cpu : usr=98.93%, sys=0.70%, ctx=61, majf=0, minf=49 00:31:40.550 IO depths : 1=0.2%, 2=0.4%, 4=6.8%, 8=77.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:31:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 complete : 0=0.0%, 4=90.1%, 8=7.1%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 issued rwts: total=4674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 filename2: (groupid=0, jobs=1): err= 0: pid=1640482: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10010msec) 00:31:40.550 slat (nsec): min=5571, max=83374, avg=12715.06, stdev=11649.21 00:31:40.550 clat (usec): min=23760, max=41937, avg=32182.57, stdev=1163.24 00:31:40.550 lat (usec): min=23768, max=41947, avg=32195.29, stdev=1162.27 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[30016], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:40.550 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.550 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:40.550 | 99.00th=[34341], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:31:40.550 | 99.99th=[41681] 00:31:40.550 bw ( KiB/s): min= 1916, max= 2048, per=4.22%, avg=1979.47, stdev=65.74, samples=19 00:31:40.550 iops : min= 479, max= 512, avg=494.79, stdev=16.36, samples=19 00:31:40.550 lat (msec) : 50=100.00% 00:31:40.550 cpu : usr=95.82%, sys=1.97%, ctx=138, majf=0, minf=51 00:31:40.550 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 filename2: (groupid=0, jobs=1): err= 0: pid=1640483: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10003msec) 00:31:40.550 slat (nsec): min=5580, max=92383, avg=21556.65, stdev=14730.48 00:31:40.550 clat (usec): min=13948, max=50439, avg=32050.99, stdev=1816.02 00:31:40.550 lat (usec): min=13954, max=50456, avg=32072.55, stdev=1816.44 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[29492], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:40.550 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:40.550 | 
70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:40.550 | 99.00th=[34341], 99.50th=[41681], 99.90th=[50594], 99.95th=[50594], 00:31:40.550 | 99.99th=[50594] 00:31:40.550 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1980.37, stdev=78.08, samples=19 00:31:40.550 iops : min= 448, max= 512, avg=495.05, stdev=19.49, samples=19 00:31:40.550 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:31:40.550 cpu : usr=96.29%, sys=1.64%, ctx=58, majf=0, minf=60 00:31:40.550 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 filename2: (groupid=0, jobs=1): err= 0: pid=1640484: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:31:40.550 slat (nsec): min=5388, max=92146, avg=13349.99, stdev=10316.92 00:31:40.550 clat (usec): min=5924, max=50355, avg=32211.30, stdev=1950.49 00:31:40.550 lat (usec): min=5932, max=50374, avg=32224.65, stdev=1950.81 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[29492], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:31:40.550 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.550 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:40.550 | 99.00th=[35914], 99.50th=[41681], 99.90th=[50070], 99.95th=[50070], 00:31:40.550 | 99.99th=[50594] 00:31:40.550 bw ( KiB/s): min= 1795, max= 2032, per=4.21%, avg=1976.37, stdev=49.43, samples=19 00:31:40.550 iops : min= 448, max= 508, avg=494.05, stdev=12.51, samples=19 00:31:40.550 lat (msec) : 10=0.08%, 20=0.24%, 50=99.35%, 100=0.32% 00:31:40.550 cpu : usr=99.15%, sys=0.53%, ctx=21, majf=0, minf=94 00:31:40.550 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=78.1%, 16=17.9%, 32=0.0%, >=64=0.0% 00:31:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 complete : 0=0.0%, 4=90.1%, 8=9.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 filename2: (groupid=0, jobs=1): err= 0: pid=1640485: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10028msec) 00:31:40.550 slat (nsec): min=5377, max=77367, avg=14866.95, stdev=11065.83 00:31:40.550 clat (usec): min=7773, max=62960, avg=32365.91, stdev=4845.58 00:31:40.550 lat (usec): min=7781, max=62965, avg=32380.78, stdev=4845.88 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[16712], 5.00th=[23200], 10.00th=[30540], 20.00th=[31327], 00:31:40.550 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.550 | 70.00th=[32637], 80.00th=[33162], 90.00th=[36439], 95.00th=[41681], 00:31:40.550 | 99.00th=[46400], 99.50th=[50594], 99.90th=[53740], 99.95th=[63177], 00:31:40.550 | 99.99th=[63177] 00:31:40.550 bw ( KiB/s): min= 1884, max= 2144, per=4.20%, avg=1971.80, stdev=67.29, samples=20 00:31:40.550 iops : min= 471, max= 536, avg=492.95, stdev=16.82, samples=20 00:31:40.550 lat (msec) : 10=0.38%, 20=1.50%, 50=97.57%, 100=0.55% 00:31:40.550 cpu : usr=98.16%, sys=1.13%, ctx=41, majf=0, minf=42 00:31:40.550 IO depths : 1=1.1%, 2=2.3%, 4=10.6%, 8=73.6%, 16=12.5%, 32=0.0%, 
>=64=0.0% 00:31:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 issued rwts: total=4941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 filename2: (groupid=0, jobs=1): err= 0: pid=1640486: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=507, BW=2029KiB/s (2077kB/s)(19.9MiB/10021msec) 00:31:40.550 slat (nsec): min=5555, max=67962, avg=10656.60, stdev=7460.93 00:31:40.550 clat (usec): min=11118, max=50716, avg=31469.04, stdev=4170.90 00:31:40.550 lat (usec): min=11130, max=50730, avg=31479.70, stdev=4171.56 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[17957], 5.00th=[21627], 10.00th=[27395], 20.00th=[31327], 00:31:40.550 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:40.550 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[34341], 00:31:40.550 | 99.00th=[45351], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:31:40.550 | 99.99th=[50594] 00:31:40.550 bw ( KiB/s): min= 1920, max= 2180, per=4.32%, avg=2026.10, stdev=80.57, samples=20 00:31:40.550 iops : min= 480, max= 545, avg=506.45, stdev=20.13, samples=20 00:31:40.550 lat (msec) : 20=3.31%, 50=96.34%, 100=0.35% 00:31:40.550 cpu : usr=99.18%, sys=0.50%, ctx=57, majf=0, minf=85 00:31:40.550 IO depths : 1=4.1%, 2=9.5%, 4=22.5%, 8=55.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.550 issued rwts: total=5082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.550 filename2: (groupid=0, jobs=1): err= 0: pid=1640487: Thu Jul 25 17:10:59 2024 00:31:40.550 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10014msec) 00:31:40.550 slat (nsec): min=5538, max=89951, avg=13541.08, stdev=11702.02 00:31:40.550 clat (usec): min=12621, max=60664, avg=33269.67, stdev=5128.53 00:31:40.550 lat (usec): min=12631, max=60675, avg=33283.22, stdev=5127.28 00:31:40.550 clat percentiles (usec): 00:31:40.550 | 1.00th=[20317], 5.00th=[24773], 10.00th=[30802], 20.00th=[31589], 00:31:40.550 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:40.550 | 70.00th=[32900], 80.00th=[33817], 90.00th=[41157], 95.00th=[43254], 00:31:40.551 | 99.00th=[51119], 99.50th=[52691], 99.90th=[60556], 99.95th=[60556], 00:31:40.551 | 99.99th=[60556] 00:31:40.551 bw ( KiB/s): min= 1772, max= 2016, per=4.09%, avg=1920.63, stdev=68.64, samples=19 00:31:40.551 iops : min= 443, max= 504, avg=480.16, stdev=17.16, samples=19 00:31:40.551 lat (msec) : 20=0.98%, 50=97.40%, 100=1.62% 00:31:40.551 cpu : usr=98.99%, sys=0.65%, ctx=64, majf=0, minf=52 00:31:40.551 IO depths : 1=0.7%, 2=1.6%, 4=8.3%, 8=75.5%, 16=13.8%, 32=0.0%, >=64=0.0% 00:31:40.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.551 complete : 0=0.0%, 4=90.4%, 8=5.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.551 issued rwts: total=4804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.551 filename2: (groupid=0, jobs=1): err= 0: pid=1640488: Thu Jul 25 17:10:59 2024 00:31:40.551 read: IOPS=485, BW=1941KiB/s (1988kB/s)(19.0MiB/10004msec) 00:31:40.551 slat (nsec): min=5550, max=88012, avg=20105.22, stdev=14636.48 00:31:40.551 
clat (usec): min=11142, max=63038, avg=32813.32, stdev=4125.50 00:31:40.551 lat (usec): min=11149, max=63044, avg=32833.43, stdev=4124.13 00:31:40.551 clat percentiles (usec): 00:31:40.551 | 1.00th=[23200], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:31:40.551 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:40.551 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[42206], 00:31:40.551 | 99.00th=[49021], 99.50th=[51119], 99.90th=[53740], 99.95th=[54789], 00:31:40.551 | 99.99th=[63177] 00:31:40.551 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1940.11, stdev=81.42, samples=19 00:31:40.551 iops : min= 448, max= 512, avg=484.95, stdev=20.38, samples=19 00:31:40.551 lat (msec) : 20=0.45%, 50=98.85%, 100=0.70% 00:31:40.551 cpu : usr=96.90%, sys=1.61%, ctx=307, majf=0, minf=62 00:31:40.551 IO depths : 1=3.8%, 2=7.7%, 4=17.6%, 8=60.9%, 16=10.0%, 32=0.0%, >=64=0.0% 00:31:40.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.551 complete : 0=0.0%, 4=92.5%, 8=3.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.551 issued rwts: total=4855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:40.551 00:31:40.551 Run status group 0 (all jobs): 00:31:40.551 READ: bw=45.8MiB/s (48.1MB/s), 1750KiB/s-2076KiB/s (1792kB/s-2126kB/s), io=460MiB (482MB), run=10002-10033msec 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 bdev_null0 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:40.551 17:10:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 [2024-07-25 17:10:59.657069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 bdev_null1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.551 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.552 { 00:31:40.552 "params": { 00:31:40.552 "name": "Nvme$subsystem", 00:31:40.552 "trtype": "$TEST_TRANSPORT", 00:31:40.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.552 "adrfam": "ipv4", 00:31:40.552 "trsvcid": "$NVMF_PORT", 00:31:40.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.552 "hdgst": ${hdgst:-false}, 00:31:40.552 "ddgst": ${ddgst:-false} 00:31:40.552 }, 00:31:40.552 "method": "bdev_nvme_attach_controller" 00:31:40.552 } 00:31:40.552 EOF 00:31:40.552 )") 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.552 { 00:31:40.552 "params": { 00:31:40.552 "name": "Nvme$subsystem", 00:31:40.552 "trtype": "$TEST_TRANSPORT", 00:31:40.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.552 "adrfam": "ipv4", 00:31:40.552 "trsvcid": "$NVMF_PORT", 00:31:40.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.552 "hdgst": ${hdgst:-false}, 
00:31:40.552 "ddgst": ${ddgst:-false} 00:31:40.552 }, 00:31:40.552 "method": "bdev_nvme_attach_controller" 00:31:40.552 } 00:31:40.552 EOF 00:31:40.552 )") 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:40.552 "params": { 00:31:40.552 "name": "Nvme0", 00:31:40.552 "trtype": "tcp", 00:31:40.552 "traddr": "10.0.0.2", 00:31:40.552 "adrfam": "ipv4", 00:31:40.552 "trsvcid": "4420", 00:31:40.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.552 "hdgst": false, 00:31:40.552 "ddgst": false 00:31:40.552 }, 00:31:40.552 "method": "bdev_nvme_attach_controller" 00:31:40.552 },{ 00:31:40.552 "params": { 00:31:40.552 "name": "Nvme1", 00:31:40.552 "trtype": "tcp", 00:31:40.552 "traddr": "10.0.0.2", 00:31:40.552 "adrfam": "ipv4", 00:31:40.552 "trsvcid": "4420", 00:31:40.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.552 "hdgst": false, 00:31:40.552 "ddgst": false 00:31:40.552 }, 00:31:40.552 "method": "bdev_nvme_attach_controller" 00:31:40.552 }' 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.552 17:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.552 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:40.552 ... 00:31:40.552 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:40.552 ... 
00:31:40.552 fio-3.35 00:31:40.552 Starting 4 threads 00:31:40.552 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.843 00:31:45.843 filename0: (groupid=0, jobs=1): err= 0: pid=1642773: Thu Jul 25 17:11:05 2024 00:31:45.843 read: IOPS=2017, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5001msec) 00:31:45.843 slat (nsec): min=5365, max=54110, avg=7224.65, stdev=2605.99 00:31:45.843 clat (usec): min=2052, max=6942, avg=3946.80, stdev=592.57 00:31:45.843 lat (usec): min=2057, max=6948, avg=3954.02, stdev=592.46 00:31:45.843 clat percentiles (usec): 00:31:45.843 | 1.00th=[ 2737], 5.00th=[ 3032], 10.00th=[ 3195], 20.00th=[ 3458], 00:31:45.843 | 30.00th=[ 3654], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 4047], 00:31:45.843 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5014], 00:31:45.843 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 6456], 00:31:45.843 | 99.99th=[ 6915] 00:31:45.843 bw ( KiB/s): min=15824, max=16592, per=24.79%, avg=16152.89, stdev=249.94, samples=9 00:31:45.843 iops : min= 1978, max= 2074, avg=2019.11, stdev=31.24, samples=9 00:31:45.843 lat (msec) : 4=57.05%, 10=42.95% 00:31:45.843 cpu : usr=96.80%, sys=2.92%, ctx=9, majf=0, minf=36 00:31:45.843 IO depths : 1=0.1%, 2=0.9%, 4=68.9%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.843 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.843 issued rwts: total=10089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.843 filename0: (groupid=0, jobs=1): err= 0: pid=1642774: Thu Jul 25 17:11:05 2024 00:31:45.843 read: IOPS=2074, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5003msec) 00:31:45.843 slat (nsec): min=5356, max=52715, avg=7269.75, stdev=2547.08 00:31:45.843 clat (usec): min=1660, max=6497, avg=3837.54, stdev=596.00 00:31:45.843 lat (usec): min=1665, max=6509, avg=3844.81, stdev=595.97 00:31:45.843 clat percentiles (usec): 00:31:45.843 | 1.00th=[ 2507], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3326], 00:31:45.843 | 30.00th=[ 3523], 40.00th=[ 3687], 50.00th=[ 3818], 60.00th=[ 3949], 00:31:45.843 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 4883], 00:31:45.843 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 6063], 99.95th=[ 6259], 00:31:45.843 | 99.99th=[ 6456] 00:31:45.843 bw ( KiB/s): min=16336, max=16848, per=25.47%, avg=16593.78, stdev=182.16, samples=9 00:31:45.843 iops : min= 2042, max= 2106, avg=2074.22, stdev=22.77, samples=9 00:31:45.843 lat (msec) : 2=0.05%, 4=63.55%, 10=36.40% 00:31:45.843 cpu : usr=96.50%, sys=3.18%, ctx=8, majf=0, minf=46 00:31:45.843 IO depths : 1=0.2%, 2=1.2%, 4=67.5%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.843 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.843 issued rwts: total=10378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.843 filename1: (groupid=0, jobs=1): err= 0: pid=1642775: Thu Jul 25 17:11:05 2024 00:31:45.843 read: IOPS=2000, BW=15.6MiB/s (16.4MB/s)(78.8MiB/5042msec) 00:31:45.843 slat (nsec): min=5356, max=58909, avg=7001.78, stdev=2072.83 00:31:45.843 clat (usec): min=1524, max=49103, avg=3960.40, stdev=1580.87 00:31:45.843 lat (usec): min=1529, max=49125, avg=3967.40, stdev=1580.97 00:31:45.843 clat percentiles (usec): 00:31:45.843 | 1.00th=[ 2540], 5.00th=[ 2900], 
10.00th=[ 3097], 20.00th=[ 3359], 00:31:45.843 | 30.00th=[ 3556], 40.00th=[ 3720], 50.00th=[ 3851], 60.00th=[ 3982], 00:31:45.843 | 70.00th=[ 4178], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5145], 00:31:45.843 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[41681], 99.95th=[49021], 00:31:45.843 | 99.99th=[49021] 00:31:45.843 bw ( KiB/s): min=14624, max=16880, per=24.63%, avg=16044.20, stdev=623.06, samples=10 00:31:45.843 iops : min= 1828, max= 2110, avg=2005.50, stdev=77.90, samples=10 00:31:45.844 lat (msec) : 2=0.09%, 4=61.03%, 10=38.77%, 50=0.11% 00:31:45.844 cpu : usr=96.61%, sys=3.02%, ctx=8, majf=0, minf=72 00:31:45.844 IO depths : 1=0.2%, 2=1.5%, 4=68.9%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.844 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.844 issued rwts: total=10085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.844 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.844 filename1: (groupid=0, jobs=1): err= 0: pid=1642776: Thu Jul 25 17:11:05 2024 00:31:45.844 read: IOPS=2100, BW=16.4MiB/s (17.2MB/s)(82.1MiB/5001msec) 00:31:45.844 slat (nsec): min=5355, max=60719, avg=7270.34, stdev=2417.34 00:31:45.844 clat (usec): min=2031, max=6688, avg=3788.59, stdev=650.58 00:31:45.844 lat (usec): min=2037, max=6696, avg=3795.86, stdev=650.64 00:31:45.844 clat percentiles (usec): 00:31:45.844 | 1.00th=[ 2442], 5.00th=[ 2835], 10.00th=[ 2999], 20.00th=[ 3228], 00:31:45.844 | 30.00th=[ 3425], 40.00th=[ 3589], 50.00th=[ 3752], 60.00th=[ 3884], 00:31:45.844 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 4948], 00:31:45.844 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6325], 99.95th=[ 6390], 00:31:45.844 | 99.99th=[ 6652] 00:31:45.844 bw ( KiB/s): min=16624, max=17136, per=25.83%, avg=16830.33, stdev=167.06, samples=9 00:31:45.844 iops : min= 2078, max= 2142, avg=2103.78, stdev=20.87, samples=9 00:31:45.844 lat (msec) : 4=66.58%, 10=33.42% 00:31:45.844 cpu : usr=96.94%, sys=2.68%, ctx=10, majf=0, minf=28 00:31:45.844 IO depths : 1=0.2%, 2=1.1%, 4=69.9%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.844 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.844 issued rwts: total=10506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.844 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.844 00:31:45.844 Run status group 0 (all jobs): 00:31:45.844 READ: bw=63.6MiB/s (66.7MB/s), 15.6MiB/s-16.4MiB/s (16.4MB/s-17.2MB/s), io=321MiB (336MB), run=5001-5042msec 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.844 00:31:45.844 real 0m24.361s 00:31:45.844 user 5m15.919s 00:31:45.844 sys 0m4.772s 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:45.844 17:11:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.844 ************************************ 00:31:45.844 END TEST fio_dif_rand_params 00:31:45.844 ************************************ 00:31:45.844 17:11:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:45.844 17:11:06 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:45.844 17:11:06 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:45.844 17:11:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:46.105 ************************************ 00:31:46.105 START TEST fio_dif_digest 00:31:46.105 ************************************ 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:46.105 
17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:46.105 bdev_null0 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:46.105 [2024-07-25 17:11:06.191746] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:46.105 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.106 17:11:06 
nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:46.106 { 00:31:46.106 "params": { 00:31:46.106 "name": "Nvme$subsystem", 00:31:46.106 "trtype": "$TEST_TRANSPORT", 00:31:46.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.106 "adrfam": "ipv4", 00:31:46.106 "trsvcid": "$NVMF_PORT", 00:31:46.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.106 "hdgst": ${hdgst:-false}, 00:31:46.106 "ddgst": ${ddgst:-false} 00:31:46.106 }, 00:31:46.106 "method": "bdev_nvme_attach_controller" 00:31:46.106 } 00:31:46.106 EOF 00:31:46.106 )") 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:46.106 "params": { 00:31:46.106 "name": "Nvme0", 00:31:46.106 "trtype": "tcp", 00:31:46.106 "traddr": "10.0.0.2", 00:31:46.106 "adrfam": "ipv4", 00:31:46.106 "trsvcid": "4420", 00:31:46.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:46.106 "hdgst": true, 00:31:46.106 "ddgst": true 00:31:46.106 }, 00:31:46.106 "method": "bdev_nvme_attach_controller" 00:31:46.106 }' 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:46.106 17:11:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.366 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:46.366 ... 
00:31:46.366 fio-3.35 00:31:46.366 Starting 3 threads 00:31:46.627 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.920 00:31:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=1644153: Thu Jul 25 17:11:17 2024 00:31:58.921 read: IOPS=121, BW=15.1MiB/s (15.9MB/s)(152MiB/10048msec) 00:31:58.921 slat (nsec): min=5748, max=47162, avg=7959.95, stdev=1789.06 00:31:58.921 clat (msec): min=8, max=138, avg=24.74, stdev=20.77 00:31:58.921 lat (msec): min=8, max=138, avg=24.75, stdev=20.77 00:31:58.921 clat percentiles (msec): 00:31:58.921 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:31:58.921 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:31:58.921 | 70.00th=[ 17], 80.00th=[ 54], 90.00th=[ 56], 95.00th=[ 57], 00:31:58.921 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 138], 99.95th=[ 140], 00:31:58.921 | 99.99th=[ 140] 00:31:58.921 bw ( KiB/s): min=11776, max=23808, per=34.21%, avg=15539.20, stdev=3486.53, samples=20 00:31:58.921 iops : min= 92, max= 186, avg=121.40, stdev=27.24, samples=20 00:31:58.921 lat (msec) : 10=7.32%, 20=65.95%, 100=26.40%, 250=0.33% 00:31:58.921 cpu : usr=96.97%, sys=2.79%, ctx=15, majf=0, minf=96 00:31:58.921 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.921 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=1644155: Thu Jul 25 17:11:17 2024 00:31:58.921 read: IOPS=116, BW=14.5MiB/s (15.2MB/s)(146MiB/10051msec) 00:31:58.921 slat (nsec): min=5599, max=34589, avg=7798.46, stdev=1747.50 00:31:58.921 clat (usec): min=9500, max=97857, avg=25760.79, stdev=19411.16 00:31:58.921 lat (usec): min=9506, max=97864, avg=25768.59, stdev=19411.03 00:31:58.921 clat percentiles (usec): 00:31:58.921 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11731], 20.00th=[12780], 00:31:58.921 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15008], 60.00th=[15926], 00:31:58.921 | 70.00th=[17433], 80.00th=[54264], 90.00th=[55837], 95.00th=[56361], 00:31:58.921 | 99.00th=[58983], 99.50th=[95945], 99.90th=[96994], 99.95th=[98042], 00:31:58.921 | 99.99th=[98042] 00:31:58.921 bw ( KiB/s): min= 8960, max=19968, per=32.85%, avg=14924.80, stdev=2646.26, samples=20 00:31:58.921 iops : min= 70, max= 156, avg=116.60, stdev=20.67, samples=20 00:31:58.921 lat (msec) : 10=0.68%, 20=71.40%, 50=0.09%, 100=27.83% 00:31:58.921 cpu : usr=96.67%, sys=3.08%, ctx=16, majf=0, minf=147 00:31:58.921 IO depths : 1=10.3%, 2=89.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.921 issued rwts: total=1168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=1644156: Thu Jul 25 17:11:17 2024 00:31:58.921 read: IOPS=117, BW=14.7MiB/s (15.4MB/s)(148MiB/10050msec) 00:31:58.921 slat (nsec): min=5722, max=31389, avg=6612.03, stdev=1328.82 00:31:58.921 clat (usec): min=7852, max=96563, avg=25438.62, stdev=19794.62 00:31:58.921 lat (usec): min=7858, max=96570, avg=25445.23, stdev=19794.58 00:31:58.921 clat percentiles (usec): 00:31:58.921 | 1.00th=[ 8586], 5.00th=[ 9765], 
10.00th=[10552], 20.00th=[11994], 00:31:58.921 | 30.00th=[12780], 40.00th=[13698], 50.00th=[14615], 60.00th=[15664], 00:31:58.921 | 70.00th=[17957], 80.00th=[53740], 90.00th=[55313], 95.00th=[56361], 00:31:58.921 | 99.00th=[93848], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:31:58.921 | 99.99th=[96994] 00:31:58.921 bw ( KiB/s): min= 9984, max=19712, per=33.28%, avg=15116.80, stdev=2482.67, samples=20 00:31:58.921 iops : min= 78, max= 154, avg=118.10, stdev=19.40, samples=20 00:31:58.921 lat (msec) : 10=6.42%, 20=64.92%, 50=0.42%, 100=28.23% 00:31:58.921 cpu : usr=96.80%, sys=2.95%, ctx=17, majf=0, minf=169 00:31:58.921 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.921 issued rwts: total=1183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:58.921 00:31:58.921 Run status group 0 (all jobs): 00:31:58.921 READ: bw=44.4MiB/s (46.5MB/s), 14.5MiB/s-15.1MiB/s (15.2MB/s-15.9MB/s), io=446MiB (468MB), run=10048-10051msec 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.921 00:31:58.921 real 0m11.116s 00:31:58.921 user 0m45.033s 00:31:58.921 sys 0m1.211s 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:58.921 17:11:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:58.921 ************************************ 00:31:58.921 END TEST fio_dif_digest 00:31:58.921 ************************************ 00:31:58.921 17:11:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:58.921 17:11:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:58.921 rmmod nvme_tcp 00:31:58.921 rmmod nvme_fabrics 00:31:58.921 rmmod nvme_keyring 
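Note on the teardown traced above: fio_dif_digest removes its target over JSON-RPC (nvmf_delete_subsystem on cnode0, then bdev_null_delete on bdev_null0) before nvmftestfini unloads the nvme-tcp kernel modules. A minimal sketch of the matching rpc.py calls follows; the two teardown calls mirror the trace, while the setup half and the null-bdev size/block-size arguments are assumptions in the style of SPDK's target/dif.sh, not commands shown in this run.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location in this workspace

# setup (assumed): transport, null bdev, subsystem, namespace, listener
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_null_create bdev_null0 64 512                                # 64 MiB null bdev, 512 B blocks (assumed sizes)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# teardown (these two RPCs are what the trace above executes)
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0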
00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1633811 ']' 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1633811 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1633811 ']' 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1633811 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1633811 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1633811' 00:31:58.921 killing process with pid 1633811 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1633811 00:31:58.921 17:11:17 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1633811 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:58.921 17:11:17 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:00.836 Waiting for block devices as requested 00:32:00.836 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:00.836 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:00.836 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:01.097 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:01.097 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:01.097 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:01.358 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:01.358 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:01.358 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:01.620 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:01.620 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:01.620 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:01.881 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:01.881 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:01.881 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:01.881 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:02.142 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:02.403 17:11:22 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:02.403 17:11:22 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:02.403 17:11:22 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:02.403 17:11:22 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:02.403 17:11:22 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.403 17:11:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:02.403 17:11:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.318 17:11:24 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:04.318 00:32:04.318 real 1m16.419s 00:32:04.318 user 8m0.534s 00:32:04.318 sys 0m19.606s 00:32:04.318 17:11:24 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.318 17:11:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:04.318 
************************************ 00:32:04.318 END TEST nvmf_dif 00:32:04.318 ************************************ 00:32:04.580 17:11:24 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:04.580 17:11:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:04.580 17:11:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.580 17:11:24 -- common/autotest_common.sh@10 -- # set +x 00:32:04.580 ************************************ 00:32:04.580 START TEST nvmf_abort_qd_sizes 00:32:04.580 ************************************ 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:04.580 * Looking for test storage... 00:32:04.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.580 17:11:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.581 17:11:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:04.581 17:11:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:12.726 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:12.726 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.726 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:12.727 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:12.727 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
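At this point both E810 ports (0000:4b:00.0/1) have been mapped to the cvl_0_0 and cvl_0_1 net devices, and the nvmf_tcp_init trace that follows moves one of them into a dedicated network namespace for the target. A condensed sketch of that wiring, using the interface names and the 10.0.0.0/24 addressing from this log (a readable restatement of the traced commands, not an alternative procedure):

ip netns add cvl_0_0_ns_spdk                                  # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                            # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1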
00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:12.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:32:12.727 00:32:12.727 --- 10.0.0.2 ping statistics --- 00:32:12.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.727 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:12.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:32:12.727 00:32:12.727 --- 10.0.0.1 ping statistics --- 00:32:12.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.727 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:12.727 17:11:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:15.275 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:15.275 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1653411 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1653411 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1653411 ']' 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:15.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:15.536 17:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:15.536 [2024-07-25 17:11:35.666883] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:32:15.536 [2024-07-25 17:11:35.666937] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.536 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.536 [2024-07-25 17:11:35.734631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:15.536 [2024-07-25 17:11:35.804599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.536 [2024-07-25 17:11:35.804636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.536 [2024-07-25 17:11:35.804644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.536 [2024-07-25 17:11:35.804650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.536 [2024-07-25 17:11:35.804657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.536 [2024-07-25 17:11:35.804798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.536 [2024-07-25 17:11:35.804931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.536 [2024-07-25 17:11:35.805087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.536 [2024-07-25 17:11:35.805087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:16.481 17:11:36 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:16.481 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:16.481 ************************************ 00:32:16.481 START TEST spdk_target_abort 00:32:16.481 ************************************ 00:32:16.481 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:32:16.481 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:16.481 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:16.481 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.481 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.743 spdk_targetn1 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.743 [2024-07-25 17:11:36.837253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.743 [2024-07-25 17:11:36.877520] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.743 17:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.743 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:17.007 [2024-07-25 17:11:37.068934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:136 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:17.007 [2024-07-25 17:11:37.068960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:32:17.007 [2024-07-25 17:11:37.069453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:160 len:8 PRP1 0x2000078be000 PRP2 0x0 00:32:17.007 [2024-07-25 17:11:37.069463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0016 p:1 m:0 dnr:0 00:32:17.007 [2024-07-25 17:11:37.088711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:480 len:8 PRP1 0x2000078be000 PRP2 0x0 00:32:17.007 [2024-07-25 17:11:37.088726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:32:17.007 [2024-07-25 17:11:37.113695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:976 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:17.007 [2024-07-25 17:11:37.113710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:007d p:1 m:0 dnr:0 00:32:17.007 [2024-07-25 17:11:37.178875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2640 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:17.007 [2024-07-25 17:11:37.178892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:20.368 Initializing NVMe Controllers 00:32:20.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:20.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:20.368 Initialization complete. Launching workers. 
00:32:20.368 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9338, failed: 5 00:32:20.368 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2802, failed to submit 6541 00:32:20.368 success 848, unsuccess 1954, failed 0 00:32:20.368 17:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:20.368 17:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:20.368 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.368 [2024-07-25 17:11:40.234426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:664 len:8 PRP1 0x200007c56000 PRP2 0x0 00:32:20.368 [2024-07-25 17:11:40.234469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:32:20.368 [2024-07-25 17:11:40.250397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:1008 len:8 PRP1 0x200007c52000 PRP2 0x0 00:32:20.368 [2024-07-25 17:11:40.250421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:32:20.368 [2024-07-25 17:11:40.386499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:4288 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:32:20.368 [2024-07-25 17:11:40.386525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:001a p:1 m:0 dnr:0 00:32:23.670 Initializing NVMe Controllers 00:32:23.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:23.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:23.670 Initialization complete. Launching workers. 00:32:23.670 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8599, failed: 3 00:32:23.670 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 7389 00:32:23.670 success 358, unsuccess 855, failed 0 00:32:23.670 17:11:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:23.670 17:11:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:23.670 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.584 [2024-07-25 17:11:45.677673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:165 nsid:1 lba:224032 len:8 PRP1 0x200007922000 PRP2 0x0 00:32:25.584 [2024-07-25 17:11:45.677705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:165 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:32:26.526 Initializing NVMe Controllers 00:32:26.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:26.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:26.526 Initialization complete. Launching workers. 
00:32:26.526 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39625, failed: 1 00:32:26.526 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2598, failed to submit 37028 00:32:26.526 success 682, unsuccess 1916, failed 0 00:32:26.526 17:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:26.526 17:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.526 17:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.526 17:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.526 17:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:26.526 17:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.526 17:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1653411 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1653411 ']' 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1653411 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1653411 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1653411' 00:32:28.443 killing process with pid 1653411 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1653411 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1653411 00:32:28.443 00:32:28.443 real 0m12.110s 00:32:28.443 user 0m48.897s 00:32:28.443 sys 0m2.087s 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:28.443 ************************************ 00:32:28.443 END TEST spdk_target_abort 00:32:28.443 ************************************ 00:32:28.443 17:11:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:28.443 17:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:28.443 17:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:28.443 17:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:28.443 ************************************ 00:32:28.443 START TEST kernel_target_abort 00:32:28.443 
************************************ 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:28.443 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:28.704 17:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:32.012 Waiting for block devices as requested 00:32:32.012 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:32.012 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:32.012 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:32.012 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:32.012 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:32.012 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:32.012 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:32.273 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:32.273 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:32.534 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:32.534 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:32.534 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:32.534 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:32.795 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:32.795 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:32.795 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:32.795 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:33.056 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:33.331 No valid GPT data, bailing 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:33.331 17:11:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:33.331 00:32:33.331 Discovery Log Number of Records 2, Generation counter 2 00:32:33.331 =====Discovery Log Entry 0====== 00:32:33.331 trtype: tcp 00:32:33.331 adrfam: ipv4 00:32:33.331 subtype: current discovery subsystem 00:32:33.331 treq: not specified, sq flow control disable supported 00:32:33.331 portid: 1 00:32:33.331 trsvcid: 4420 00:32:33.331 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:33.331 traddr: 10.0.0.1 00:32:33.331 eflags: none 00:32:33.331 sectype: none 00:32:33.331 =====Discovery Log Entry 1====== 00:32:33.331 trtype: tcp 00:32:33.331 adrfam: ipv4 00:32:33.331 subtype: nvme subsystem 00:32:33.331 treq: not specified, sq flow control disable supported 00:32:33.331 portid: 1 00:32:33.331 trsvcid: 4420 00:32:33.331 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:33.331 traddr: 10.0.0.1 00:32:33.331 eflags: none 00:32:33.331 sectype: none 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:33.331 17:11:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:33.331 17:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:33.331 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.639 Initializing NVMe Controllers 00:32:36.639 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:36.639 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:36.639 Initialization complete. Launching workers. 00:32:36.639 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37502, failed: 0 00:32:36.639 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37502, failed to submit 0 00:32:36.639 success 0, unsuccess 37502, failed 0 00:32:36.639 17:11:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:36.639 17:11:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:36.639 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.012 Initializing NVMe Controllers 00:32:40.012 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:40.012 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:40.012 Initialization complete. Launching workers. 
00:32:40.012 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76078, failed: 0 00:32:40.012 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19150, failed to submit 56928 00:32:40.012 success 0, unsuccess 19150, failed 0 00:32:40.012 17:11:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:40.012 17:11:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:40.012 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.602 Initializing NVMe Controllers 00:32:42.602 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:42.602 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:42.602 Initialization complete. Launching workers. 00:32:42.602 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73415, failed: 0 00:32:42.602 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18342, failed to submit 55073 00:32:42.602 success 0, unsuccess 18342, failed 0 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:42.602 17:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:45.910 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:45.910 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:46.172 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:46.172 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:48.088 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:48.350 00:32:48.350 real 0m19.708s 00:32:48.350 user 0m6.875s 00:32:48.350 sys 0m6.503s 00:32:48.350 17:12:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:48.350 17:12:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.350 ************************************ 00:32:48.350 END TEST kernel_target_abort 00:32:48.350 ************************************ 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:48.350 rmmod nvme_tcp 00:32:48.350 rmmod nvme_fabrics 00:32:48.350 rmmod nvme_keyring 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1653411 ']' 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1653411 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1653411 ']' 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1653411 00:32:48.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1653411) - No such process 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1653411 is not found' 00:32:48.350 Process with pid 1653411 is not found 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:48.350 17:12:08 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:51.660 Waiting for block devices as requested 00:32:51.660 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:51.922 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:51.922 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:51.922 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:52.184 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:52.184 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:52.184 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:52.445 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:52.445 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:52.707 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:52.707 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:52.707 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:52.707 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:52.968 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:52.968 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:52.968 0000:00:01.0 
(8086 0b00): vfio-pci -> ioatdma 00:32:52.968 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:53.241 17:12:13 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:53.241 17:12:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:53.241 17:12:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:53.241 17:12:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:53.241 17:12:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.241 17:12:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:53.241 17:12:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.794 17:12:15 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:55.794 00:32:55.794 real 0m50.922s 00:32:55.794 user 1m0.823s 00:32:55.794 sys 0m19.249s 00:32:55.794 17:12:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:55.794 17:12:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:55.794 ************************************ 00:32:55.794 END TEST nvmf_abort_qd_sizes 00:32:55.794 ************************************ 00:32:55.794 17:12:15 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:55.794 17:12:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:55.794 17:12:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:55.794 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:32:55.794 ************************************ 00:32:55.794 START TEST keyring_file 00:32:55.794 ************************************ 00:32:55.794 17:12:15 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:55.794 * Looking for test storage... 
00:32:55.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:55.794 17:12:15 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:55.794 17:12:15 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.794 17:12:15 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.794 17:12:15 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.794 17:12:15 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.794 17:12:15 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.794 17:12:15 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.794 17:12:15 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.794 17:12:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:55.794 17:12:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:55.794 17:12:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UHKGLm5bAX 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:55.795 17:12:15 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UHKGLm5bAX 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UHKGLm5bAX 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.UHKGLm5bAX 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oCMncvaRtR 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:55.795 17:12:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oCMncvaRtR 00:32:55.795 17:12:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oCMncvaRtR 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.oCMncvaRtR 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=1664357 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1664357 00:32:55.795 17:12:15 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:55.795 17:12:15 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1664357 ']' 00:32:55.795 17:12:15 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.795 17:12:15 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.795 17:12:15 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.795 17:12:15 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.795 17:12:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:55.795 [2024-07-25 17:12:15.953146] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
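
The prep_key steps traced above generate a TLS PSK in the NVMe interchange format and drop it into a temp file with 0600 permissions. A minimal sketch of that flow follows; the mktemp/chmod/echo steps come from the trace, while the exact encoding (prefix, two-digit hash indicator, base64 of the key bytes with a trailing CRC-32) is an assumption about the interchange layout rather than a copy of the nvmf/common.sh helper.

# Illustrative sketch of prep_key (assumed encoding, not the exact nvmf/common.sh code).
prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)                      # e.g. /tmp/tmp.UHKGLm5bAX
    python3 - "$key" "$digest" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
# Assumed interchange layout: prefix, hash indicator, base64(key bytes + CRC-32 of key).
blob = key + zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
EOF
    chmod 0600 "$path"                  # keyring_file_add_key rejects looser modes (see the 0660 check later)
    echo "$path"
}

key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
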
00:32:55.795 [2024-07-25 17:12:15.953230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664357 ] 00:32:55.795 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.795 [2024-07-25 17:12:16.019523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.056 [2024-07-25 17:12:16.094931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:56.629 17:12:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.629 [2024-07-25 17:12:16.731635] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.629 null0 00:32:56.629 [2024-07-25 17:12:16.763684] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:56.629 [2024-07-25 17:12:16.763914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:56.629 [2024-07-25 17:12:16.771693] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.629 17:12:16 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.629 [2024-07-25 17:12:16.783719] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:56.629 request: 00:32:56.629 { 00:32:56.629 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:56.629 "secure_channel": false, 00:32:56.629 "listen_address": { 00:32:56.629 "trtype": "tcp", 00:32:56.629 "traddr": "127.0.0.1", 00:32:56.629 "trsvcid": "4420" 00:32:56.629 }, 00:32:56.629 "method": "nvmf_subsystem_add_listener", 00:32:56.629 "req_id": 1 00:32:56.629 } 00:32:56.629 Got JSON-RPC error response 00:32:56.629 response: 00:32:56.629 { 00:32:56.629 "code": -32602, 00:32:56.629 "message": "Invalid parameters" 00:32:56.629 } 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 
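
Stripped of the NOT/xtrace plumbing, the listener check above amounts to the rpc.py calls sketched below. Only the add-listener calls appear verbatim in this trace; the transport and subsystem creation lines are assumed from the usual flow (they sit in the batched rpc_cmd earlier in the script).

# Sketch of the duplicate-listener check (assumed setup steps marked below).
rpc=scripts/rpc.py                          # default socket: /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp           # assumed: done earlier in the batched rpc_cmd
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # assumed: likewise
$rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
# Adding the very same listener again must fail; the trace above shows the expected
# "Listener already exists" / -32602 "Invalid parameters" response.
if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "duplicate listener unexpectedly accepted" >&2
    exit 1
fi
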
00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:56.629 17:12:16 keyring_file -- keyring/file.sh@46 -- # bperfpid=1664500 00:32:56.629 17:12:16 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1664500 /var/tmp/bperf.sock 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1664500 ']' 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:56.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:56.629 17:12:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.629 17:12:16 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:56.629 [2024-07-25 17:12:16.837382] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 00:32:56.629 [2024-07-25 17:12:16.837429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664500 ] 00:32:56.629 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.890 [2024-07-25 17:12:16.911981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.890 [2024-07-25 17:12:16.976188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.463 17:12:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:57.463 17:12:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:57.463 17:12:17 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:32:57.463 17:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:32:57.724 17:12:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oCMncvaRtR 00:32:57.724 17:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oCMncvaRtR 00:32:57.724 17:12:17 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:57.724 17:12:17 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:57.724 17:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.724 17:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.724 17:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.986 17:12:18 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.UHKGLm5bAX == \/\t\m\p\/\t\m\p\.\U\H\K\G\L\m\5\b\A\X ]] 00:32:57.986 17:12:18 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:57.986 17:12:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:57.986 17:12:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.oCMncvaRtR == \/\t\m\p\/\t\m\p\.\o\C\M\n\c\v\a\R\t\R ]] 00:32:57.986 17:12:18 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.986 17:12:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:58.247 17:12:18 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:58.247 17:12:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:58.247 17:12:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:58.247 17:12:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.247 17:12:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.247 17:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.247 17:12:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:58.508 17:12:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:58.508 17:12:18 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:58.508 17:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:58.508 [2024-07-25 17:12:18.673135] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:58.508 nvme0n1 00:32:58.508 17:12:18 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:58.508 17:12:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:58.508 17:12:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.508 17:12:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.508 17:12:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:58.508 17:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.769 17:12:18 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:58.769 17:12:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:58.769 17:12:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:58.769 17:12:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.769 17:12:18 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.769 17:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.769 17:12:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:59.030 17:12:19 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:59.030 17:12:19 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:59.030 Running I/O for 1 seconds... 00:33:00.028 00:33:00.028 Latency(us) 00:33:00.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.028 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:00.028 nvme0n1 : 1.02 5613.29 21.93 0.00 0.00 22530.43 4942.51 29491.20 00:33:00.028 =================================================================================================================== 00:33:00.028 Total : 5613.29 21.93 0.00 0.00 22530.43 4942.51 29491.20 00:33:00.028 0 00:33:00.028 17:12:20 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:00.028 17:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:00.289 17:12:20 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.289 17:12:20 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:00.289 17:12:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:00.289 17:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.550 17:12:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:00.550 17:12:20 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.550 17:12:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:00.550 17:12:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.550 17:12:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:00.550 17:12:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:00.550 17:12:20 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:00.550 17:12:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:00.550 17:12:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.550 17:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.810 [2024-07-25 17:12:20.853928] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:00.810 [2024-07-25 17:12:20.854707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ac170 (107): Transport endpoint is not connected 00:33:00.810 [2024-07-25 17:12:20.855703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ac170 (9): Bad file descriptor 00:33:00.810 [2024-07-25 17:12:20.856704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.810 [2024-07-25 17:12:20.856711] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:00.810 [2024-07-25 17:12:20.856716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.810 request: 00:33:00.810 { 00:33:00.810 "name": "nvme0", 00:33:00.810 "trtype": "tcp", 00:33:00.810 "traddr": "127.0.0.1", 00:33:00.810 "adrfam": "ipv4", 00:33:00.810 "trsvcid": "4420", 00:33:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:00.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:00.810 "prchk_reftag": false, 00:33:00.810 "prchk_guard": false, 00:33:00.810 "hdgst": false, 00:33:00.810 "ddgst": false, 00:33:00.810 "psk": "key1", 00:33:00.810 "method": "bdev_nvme_attach_controller", 00:33:00.810 "req_id": 1 00:33:00.810 } 00:33:00.810 Got JSON-RPC error response 00:33:00.810 response: 00:33:00.810 { 00:33:00.810 "code": -5, 00:33:00.810 "message": "Input/output error" 00:33:00.810 } 00:33:00.810 17:12:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:00.810 17:12:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:00.810 17:12:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:00.810 17:12:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:00.811 17:12:20 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:00.811 17:12:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.811 17:12:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.811 17:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.811 17:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.811 17:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.811 17:12:21 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:00.811 17:12:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:00.811 17:12:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:00.811 17:12:21 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.811 17:12:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.811 17:12:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:00.811 17:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.072 17:12:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:01.072 17:12:21 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:01.072 17:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:01.333 17:12:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:01.333 17:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:01.333 17:12:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:01.333 17:12:21 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:01.333 17:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.595 17:12:21 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:01.595 17:12:21 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.UHKGLm5bAX 00:33:01.595 17:12:21 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:33:01.595 17:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:33:01.595 [2024-07-25 17:12:21.817452] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UHKGLm5bAX': 0100660 00:33:01.595 [2024-07-25 17:12:21.817471] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:01.595 request: 00:33:01.595 { 00:33:01.595 "name": "key0", 00:33:01.595 "path": "/tmp/tmp.UHKGLm5bAX", 00:33:01.595 "method": "keyring_file_add_key", 00:33:01.595 "req_id": 1 00:33:01.595 } 00:33:01.595 Got JSON-RPC error response 00:33:01.595 response: 00:33:01.595 { 00:33:01.595 "code": -1, 00:33:01.595 "message": "Operation not permitted" 00:33:01.595 } 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:01.595 17:12:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:01.595 17:12:21 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:01.595 17:12:21 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.UHKGLm5bAX 00:33:01.595 17:12:21 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:33:01.595 17:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UHKGLm5bAX 00:33:01.856 17:12:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.UHKGLm5bAX 00:33:01.856 17:12:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:01.856 17:12:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:01.856 17:12:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.856 17:12:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.856 17:12:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:01.856 17:12:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.119 17:12:22 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:02.119 17:12:22 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.119 17:12:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.119 [2024-07-25 17:12:22.294666] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.UHKGLm5bAX': No such file or directory 00:33:02.119 [2024-07-25 17:12:22.294679] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:02.119 [2024-07-25 17:12:22.294695] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:02.119 [2024-07-25 17:12:22.294700] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:02.119 [2024-07-25 17:12:22.294706] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:02.119 request: 00:33:02.119 { 00:33:02.119 "name": "nvme0", 00:33:02.119 "trtype": "tcp", 00:33:02.119 "traddr": "127.0.0.1", 00:33:02.119 "adrfam": "ipv4", 00:33:02.119 
"trsvcid": "4420", 00:33:02.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:02.119 "prchk_reftag": false, 00:33:02.119 "prchk_guard": false, 00:33:02.119 "hdgst": false, 00:33:02.119 "ddgst": false, 00:33:02.119 "psk": "key0", 00:33:02.119 "method": "bdev_nvme_attach_controller", 00:33:02.119 "req_id": 1 00:33:02.119 } 00:33:02.119 Got JSON-RPC error response 00:33:02.119 response: 00:33:02.119 { 00:33:02.119 "code": -19, 00:33:02.119 "message": "No such device" 00:33:02.119 } 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:02.119 17:12:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:02.119 17:12:22 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:02.119 17:12:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:02.379 17:12:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:02.379 17:12:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:02.379 17:12:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cAvB73uKZN 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:02.380 17:12:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:02.380 17:12:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:02.380 17:12:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:02.380 17:12:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:02.380 17:12:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:02.380 17:12:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cAvB73uKZN 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cAvB73uKZN 00:33:02.380 17:12:22 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.cAvB73uKZN 00:33:02.380 17:12:22 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cAvB73uKZN 00:33:02.380 17:12:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cAvB73uKZN 00:33:02.641 17:12:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.641 17:12:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.641 nvme0n1 00:33:02.641 
17:12:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:02.641 17:12:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:02.641 17:12:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:02.641 17:12:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.641 17:12:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:02.641 17:12:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.903 17:12:23 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:02.903 17:12:23 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:02.903 17:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:03.164 17:12:23 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:03.164 17:12:23 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.164 17:12:23 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:03.164 17:12:23 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.164 17:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.426 17:12:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:03.426 17:12:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:03.426 17:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:03.687 17:12:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:03.687 17:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.687 17:12:23 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:03.687 17:12:23 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:03.687 17:12:23 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cAvB73uKZN 00:33:03.687 17:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cAvB73uKZN 00:33:03.948 17:12:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oCMncvaRtR 00:33:03.948 17:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oCMncvaRtR 00:33:03.948 17:12:24 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.948 17:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.210 nvme0n1 00:33:04.210 17:12:24 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:04.210 17:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:04.472 17:12:24 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:04.472 "subsystems": [ 00:33:04.472 { 00:33:04.472 "subsystem": "keyring", 00:33:04.472 "config": [ 00:33:04.472 { 00:33:04.472 "method": "keyring_file_add_key", 00:33:04.472 "params": { 00:33:04.472 "name": "key0", 00:33:04.472 "path": "/tmp/tmp.cAvB73uKZN" 00:33:04.472 } 00:33:04.472 }, 00:33:04.472 { 00:33:04.472 "method": "keyring_file_add_key", 00:33:04.472 "params": { 00:33:04.472 "name": "key1", 00:33:04.472 "path": "/tmp/tmp.oCMncvaRtR" 00:33:04.472 } 00:33:04.472 } 00:33:04.472 ] 00:33:04.472 }, 00:33:04.472 { 00:33:04.472 "subsystem": "iobuf", 00:33:04.472 "config": [ 00:33:04.472 { 00:33:04.472 "method": "iobuf_set_options", 00:33:04.472 "params": { 00:33:04.472 "small_pool_count": 8192, 00:33:04.472 "large_pool_count": 1024, 00:33:04.472 "small_bufsize": 8192, 00:33:04.472 "large_bufsize": 135168 00:33:04.472 } 00:33:04.472 } 00:33:04.472 ] 00:33:04.472 }, 00:33:04.472 { 00:33:04.472 "subsystem": "sock", 00:33:04.472 "config": [ 00:33:04.472 { 00:33:04.472 "method": "sock_set_default_impl", 00:33:04.472 "params": { 00:33:04.472 "impl_name": "posix" 00:33:04.472 } 00:33:04.472 }, 00:33:04.472 { 00:33:04.472 "method": "sock_impl_set_options", 00:33:04.472 "params": { 00:33:04.472 "impl_name": "ssl", 00:33:04.472 "recv_buf_size": 4096, 00:33:04.472 "send_buf_size": 4096, 00:33:04.472 "enable_recv_pipe": true, 00:33:04.472 "enable_quickack": false, 00:33:04.472 "enable_placement_id": 0, 00:33:04.472 "enable_zerocopy_send_server": true, 00:33:04.472 "enable_zerocopy_send_client": false, 00:33:04.472 "zerocopy_threshold": 0, 00:33:04.472 "tls_version": 0, 00:33:04.472 "enable_ktls": false 00:33:04.472 } 00:33:04.472 }, 00:33:04.472 { 00:33:04.472 "method": "sock_impl_set_options", 00:33:04.472 "params": { 00:33:04.472 "impl_name": "posix", 00:33:04.472 "recv_buf_size": 2097152, 00:33:04.472 "send_buf_size": 2097152, 00:33:04.472 "enable_recv_pipe": true, 00:33:04.472 "enable_quickack": false, 00:33:04.472 "enable_placement_id": 0, 00:33:04.472 "enable_zerocopy_send_server": true, 00:33:04.472 "enable_zerocopy_send_client": false, 00:33:04.472 "zerocopy_threshold": 0, 00:33:04.472 "tls_version": 0, 00:33:04.472 "enable_ktls": false 00:33:04.472 } 00:33:04.472 } 00:33:04.472 ] 00:33:04.472 }, 00:33:04.472 { 00:33:04.472 "subsystem": "vmd", 00:33:04.472 "config": [] 00:33:04.472 }, 00:33:04.472 { 00:33:04.473 "subsystem": "accel", 00:33:04.473 "config": [ 00:33:04.473 { 00:33:04.473 "method": "accel_set_options", 00:33:04.473 "params": { 00:33:04.473 "small_cache_size": 128, 00:33:04.473 "large_cache_size": 16, 00:33:04.473 "task_count": 2048, 00:33:04.473 "sequence_count": 2048, 00:33:04.473 "buf_count": 2048 00:33:04.473 } 00:33:04.473 } 00:33:04.473 ] 00:33:04.473 
}, 00:33:04.473 { 00:33:04.473 "subsystem": "bdev", 00:33:04.473 "config": [ 00:33:04.473 { 00:33:04.473 "method": "bdev_set_options", 00:33:04.473 "params": { 00:33:04.473 "bdev_io_pool_size": 65535, 00:33:04.473 "bdev_io_cache_size": 256, 00:33:04.473 "bdev_auto_examine": true, 00:33:04.473 "iobuf_small_cache_size": 128, 00:33:04.473 "iobuf_large_cache_size": 16 00:33:04.473 } 00:33:04.473 }, 00:33:04.473 { 00:33:04.473 "method": "bdev_raid_set_options", 00:33:04.473 "params": { 00:33:04.473 "process_window_size_kb": 1024, 00:33:04.473 "process_max_bandwidth_mb_sec": 0 00:33:04.473 } 00:33:04.473 }, 00:33:04.473 { 00:33:04.473 "method": "bdev_iscsi_set_options", 00:33:04.473 "params": { 00:33:04.473 "timeout_sec": 30 00:33:04.473 } 00:33:04.473 }, 00:33:04.473 { 00:33:04.473 "method": "bdev_nvme_set_options", 00:33:04.473 "params": { 00:33:04.473 "action_on_timeout": "none", 00:33:04.473 "timeout_us": 0, 00:33:04.473 "timeout_admin_us": 0, 00:33:04.473 "keep_alive_timeout_ms": 10000, 00:33:04.473 "arbitration_burst": 0, 00:33:04.473 "low_priority_weight": 0, 00:33:04.473 "medium_priority_weight": 0, 00:33:04.473 "high_priority_weight": 0, 00:33:04.473 "nvme_adminq_poll_period_us": 10000, 00:33:04.473 "nvme_ioq_poll_period_us": 0, 00:33:04.473 "io_queue_requests": 512, 00:33:04.473 "delay_cmd_submit": true, 00:33:04.473 "transport_retry_count": 4, 00:33:04.473 "bdev_retry_count": 3, 00:33:04.473 "transport_ack_timeout": 0, 00:33:04.473 "ctrlr_loss_timeout_sec": 0, 00:33:04.473 "reconnect_delay_sec": 0, 00:33:04.473 "fast_io_fail_timeout_sec": 0, 00:33:04.473 "disable_auto_failback": false, 00:33:04.473 "generate_uuids": false, 00:33:04.473 "transport_tos": 0, 00:33:04.473 "nvme_error_stat": false, 00:33:04.473 "rdma_srq_size": 0, 00:33:04.473 "io_path_stat": false, 00:33:04.473 "allow_accel_sequence": false, 00:33:04.473 "rdma_max_cq_size": 0, 00:33:04.473 "rdma_cm_event_timeout_ms": 0, 00:33:04.473 "dhchap_digests": [ 00:33:04.473 "sha256", 00:33:04.473 "sha384", 00:33:04.473 "sha512" 00:33:04.473 ], 00:33:04.473 "dhchap_dhgroups": [ 00:33:04.473 "null", 00:33:04.473 "ffdhe2048", 00:33:04.473 "ffdhe3072", 00:33:04.473 "ffdhe4096", 00:33:04.473 "ffdhe6144", 00:33:04.473 "ffdhe8192" 00:33:04.473 ] 00:33:04.473 } 00:33:04.473 }, 00:33:04.473 { 00:33:04.473 "method": "bdev_nvme_attach_controller", 00:33:04.473 "params": { 00:33:04.473 "name": "nvme0", 00:33:04.473 "trtype": "TCP", 00:33:04.473 "adrfam": "IPv4", 00:33:04.473 "traddr": "127.0.0.1", 00:33:04.473 "trsvcid": "4420", 00:33:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.473 "prchk_reftag": false, 00:33:04.473 "prchk_guard": false, 00:33:04.473 "ctrlr_loss_timeout_sec": 0, 00:33:04.473 "reconnect_delay_sec": 0, 00:33:04.473 "fast_io_fail_timeout_sec": 0, 00:33:04.473 "psk": "key0", 00:33:04.473 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:04.473 "hdgst": false, 00:33:04.473 "ddgst": false 00:33:04.473 } 00:33:04.473 }, 00:33:04.473 { 00:33:04.473 "method": "bdev_nvme_set_hotplug", 00:33:04.473 "params": { 00:33:04.473 "period_us": 100000, 00:33:04.473 "enable": false 00:33:04.473 } 00:33:04.473 }, 00:33:04.473 { 00:33:04.473 "method": "bdev_wait_for_examine" 00:33:04.473 } 00:33:04.473 ] 00:33:04.473 }, 00:33:04.473 { 00:33:04.473 "subsystem": "nbd", 00:33:04.473 "config": [] 00:33:04.473 } 00:33:04.473 ] 00:33:04.473 }' 00:33:04.473 17:12:24 keyring_file -- keyring/file.sh@114 -- # killprocess 1664500 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1664500 ']' 00:33:04.473 17:12:24 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 1664500 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1664500 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1664500' 00:33:04.473 killing process with pid 1664500 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@969 -- # kill 1664500 00:33:04.473 Received shutdown signal, test time was about 1.000000 seconds 00:33:04.473 00:33:04.473 Latency(us) 00:33:04.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.473 =================================================================================================================== 00:33:04.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.473 17:12:24 keyring_file -- common/autotest_common.sh@974 -- # wait 1664500 00:33:04.735 17:12:24 keyring_file -- keyring/file.sh@117 -- # bperfpid=1666163 00:33:04.735 17:12:24 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1666163 /var/tmp/bperf.sock 00:33:04.735 17:12:24 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1666163 ']' 00:33:04.735 17:12:24 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.735 17:12:24 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:04.735 17:12:24 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:04.735 17:12:24 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
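The restart above follows the pattern used throughout keyring/file.sh: the first bdevperf instance (pid 1664500) had its live configuration captured with save_config at file.sh@112, was killed at file.sh@114, and a fresh instance is now launched with -z -c /dev/fd/63 so that the same JSON (echoed in full just below) is replayed at startup through a process substitution. A minimal sketch of that round trip, assuming the SPDK repository root as the working directory (the socket path and bdevperf flags are copied from the invocations in this log):

  # capture the running app's configuration over its RPC socket
  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)

  # relaunch bdevperf so it replays that config during init; the <(...)
  # process substitution is what shows up as /dev/fd/63 in the log
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
          -r /var/tmp/bperf.sock -z -c <(echo "$config") &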
00:33:04.735 17:12:24 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:04.735 17:12:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:04.735 17:12:24 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:04.735 "subsystems": [ 00:33:04.735 { 00:33:04.735 "subsystem": "keyring", 00:33:04.735 "config": [ 00:33:04.735 { 00:33:04.735 "method": "keyring_file_add_key", 00:33:04.735 "params": { 00:33:04.735 "name": "key0", 00:33:04.735 "path": "/tmp/tmp.cAvB73uKZN" 00:33:04.735 } 00:33:04.735 }, 00:33:04.735 { 00:33:04.735 "method": "keyring_file_add_key", 00:33:04.735 "params": { 00:33:04.735 "name": "key1", 00:33:04.735 "path": "/tmp/tmp.oCMncvaRtR" 00:33:04.735 } 00:33:04.735 } 00:33:04.735 ] 00:33:04.735 }, 00:33:04.735 { 00:33:04.735 "subsystem": "iobuf", 00:33:04.735 "config": [ 00:33:04.735 { 00:33:04.735 "method": "iobuf_set_options", 00:33:04.735 "params": { 00:33:04.735 "small_pool_count": 8192, 00:33:04.735 "large_pool_count": 1024, 00:33:04.735 "small_bufsize": 8192, 00:33:04.736 "large_bufsize": 135168 00:33:04.736 } 00:33:04.736 } 00:33:04.736 ] 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "subsystem": "sock", 00:33:04.736 "config": [ 00:33:04.736 { 00:33:04.736 "method": "sock_set_default_impl", 00:33:04.736 "params": { 00:33:04.736 "impl_name": "posix" 00:33:04.736 } 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "sock_impl_set_options", 00:33:04.736 "params": { 00:33:04.736 "impl_name": "ssl", 00:33:04.736 "recv_buf_size": 4096, 00:33:04.736 "send_buf_size": 4096, 00:33:04.736 "enable_recv_pipe": true, 00:33:04.736 "enable_quickack": false, 00:33:04.736 "enable_placement_id": 0, 00:33:04.736 "enable_zerocopy_send_server": true, 00:33:04.736 "enable_zerocopy_send_client": false, 00:33:04.736 "zerocopy_threshold": 0, 00:33:04.736 "tls_version": 0, 00:33:04.736 "enable_ktls": false 00:33:04.736 } 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "sock_impl_set_options", 00:33:04.736 "params": { 00:33:04.736 "impl_name": "posix", 00:33:04.736 "recv_buf_size": 2097152, 00:33:04.736 "send_buf_size": 2097152, 00:33:04.736 "enable_recv_pipe": true, 00:33:04.736 "enable_quickack": false, 00:33:04.736 "enable_placement_id": 0, 00:33:04.736 "enable_zerocopy_send_server": true, 00:33:04.736 "enable_zerocopy_send_client": false, 00:33:04.736 "zerocopy_threshold": 0, 00:33:04.736 "tls_version": 0, 00:33:04.736 "enable_ktls": false 00:33:04.736 } 00:33:04.736 } 00:33:04.736 ] 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "subsystem": "vmd", 00:33:04.736 "config": [] 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "subsystem": "accel", 00:33:04.736 "config": [ 00:33:04.736 { 00:33:04.736 "method": "accel_set_options", 00:33:04.736 "params": { 00:33:04.736 "small_cache_size": 128, 00:33:04.736 "large_cache_size": 16, 00:33:04.736 "task_count": 2048, 00:33:04.736 "sequence_count": 2048, 00:33:04.736 "buf_count": 2048 00:33:04.736 } 00:33:04.736 } 00:33:04.736 ] 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "subsystem": "bdev", 00:33:04.736 "config": [ 00:33:04.736 { 00:33:04.736 "method": "bdev_set_options", 00:33:04.736 "params": { 00:33:04.736 "bdev_io_pool_size": 65535, 00:33:04.736 "bdev_io_cache_size": 256, 00:33:04.736 "bdev_auto_examine": true, 00:33:04.736 "iobuf_small_cache_size": 128, 00:33:04.736 "iobuf_large_cache_size": 16 00:33:04.736 } 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "bdev_raid_set_options", 00:33:04.736 "params": { 00:33:04.736 "process_window_size_kb": 1024, 00:33:04.736 "process_max_bandwidth_mb_sec": 0 00:33:04.736 
} 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "bdev_iscsi_set_options", 00:33:04.736 "params": { 00:33:04.736 "timeout_sec": 30 00:33:04.736 } 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "bdev_nvme_set_options", 00:33:04.736 "params": { 00:33:04.736 "action_on_timeout": "none", 00:33:04.736 "timeout_us": 0, 00:33:04.736 "timeout_admin_us": 0, 00:33:04.736 "keep_alive_timeout_ms": 10000, 00:33:04.736 "arbitration_burst": 0, 00:33:04.736 "low_priority_weight": 0, 00:33:04.736 "medium_priority_weight": 0, 00:33:04.736 "high_priority_weight": 0, 00:33:04.736 "nvme_adminq_poll_period_us": 10000, 00:33:04.736 "nvme_ioq_poll_period_us": 0, 00:33:04.736 "io_queue_requests": 512, 00:33:04.736 "delay_cmd_submit": true, 00:33:04.736 "transport_retry_count": 4, 00:33:04.736 "bdev_retry_count": 3, 00:33:04.736 "transport_ack_timeout": 0, 00:33:04.736 "ctrlr_loss_timeout_sec": 0, 00:33:04.736 "reconnect_delay_sec": 0, 00:33:04.736 "fast_io_fail_timeout_sec": 0, 00:33:04.736 "disable_auto_failback": false, 00:33:04.736 "generate_uuids": false, 00:33:04.736 "transport_tos": 0, 00:33:04.736 "nvme_error_stat": false, 00:33:04.736 "rdma_srq_size": 0, 00:33:04.736 "io_path_stat": false, 00:33:04.736 "allow_accel_sequence": false, 00:33:04.736 "rdma_max_cq_size": 0, 00:33:04.736 "rdma_cm_event_timeout_ms": 0, 00:33:04.736 "dhchap_digests": [ 00:33:04.736 "sha256", 00:33:04.736 "sha384", 00:33:04.736 "sha512" 00:33:04.736 ], 00:33:04.736 "dhchap_dhgroups": [ 00:33:04.736 "null", 00:33:04.736 "ffdhe2048", 00:33:04.736 "ffdhe3072", 00:33:04.736 "ffdhe4096", 00:33:04.736 "ffdhe6144", 00:33:04.736 "ffdhe8192" 00:33:04.736 ] 00:33:04.736 } 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "bdev_nvme_attach_controller", 00:33:04.736 "params": { 00:33:04.736 "name": "nvme0", 00:33:04.736 "trtype": "TCP", 00:33:04.736 "adrfam": "IPv4", 00:33:04.736 "traddr": "127.0.0.1", 00:33:04.736 "trsvcid": "4420", 00:33:04.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.736 "prchk_reftag": false, 00:33:04.736 "prchk_guard": false, 00:33:04.736 "ctrlr_loss_timeout_sec": 0, 00:33:04.736 "reconnect_delay_sec": 0, 00:33:04.736 "fast_io_fail_timeout_sec": 0, 00:33:04.736 "psk": "key0", 00:33:04.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:04.736 "hdgst": false, 00:33:04.736 "ddgst": false 00:33:04.736 } 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "bdev_nvme_set_hotplug", 00:33:04.736 "params": { 00:33:04.736 "period_us": 100000, 00:33:04.736 "enable": false 00:33:04.736 } 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "method": "bdev_wait_for_examine" 00:33:04.736 } 00:33:04.736 ] 00:33:04.736 }, 00:33:04.736 { 00:33:04.736 "subsystem": "nbd", 00:33:04.736 "config": [] 00:33:04.736 } 00:33:04.736 ] 00:33:04.736 }' 00:33:04.736 [2024-07-25 17:12:24.871073] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
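The replayed configuration above carries the complete keyring state: two keyring_file_add_key entries plus a bdev_nvme_attach_controller whose "psk": "key0" refers to the first of them by name. Built by hand against a running app, the same state would look roughly like the sketch below; the socket path, key file paths and attach flags are taken from this run, while the positional argument order of keyring_file_add_key is an assumption about the rpc.py wrapper rather than something shown verbatim here:

  # register each on-disk PSK file under a short key name
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cAvB73uKZN
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oCMncvaRtR

  # attach the NVMe-oF/TCP controller, pointing --psk at the key name
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
          -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
          -q nqn.2016-06.io.spdk:host0 --psk key0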
00:33:04.736 [2024-07-25 17:12:24.871132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666163 ] 00:33:04.736 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.736 [2024-07-25 17:12:24.944320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.736 [2024-07-25 17:12:24.997855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.998 [2024-07-25 17:12:25.139566] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:05.571 17:12:25 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:05.571 17:12:25 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:05.571 17:12:25 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:05.571 17:12:25 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:05.571 17:12:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.571 17:12:25 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:05.571 17:12:25 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:05.571 17:12:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:05.571 17:12:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.571 17:12:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.571 17:12:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:05.571 17:12:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.832 17:12:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:05.832 17:12:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:05.832 17:12:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:05.832 17:12:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.832 17:12:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.832 17:12:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.832 17:12:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:06.093 17:12:26 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:06.093 17:12:26 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:06.093 17:12:26 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:06.093 17:12:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:06.093 17:12:26 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:06.093 17:12:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:06.093 17:12:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.cAvB73uKZN /tmp/tmp.oCMncvaRtR 00:33:06.093 17:12:26 keyring_file -- keyring/file.sh@20 -- # killprocess 1666163 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1666163 ']' 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1666163 00:33:06.093 17:12:26 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666163 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666163' 00:33:06.093 killing process with pid 1666163 00:33:06.093 17:12:26 keyring_file -- common/autotest_common.sh@969 -- # kill 1666163 00:33:06.093 Received shutdown signal, test time was about 1.000000 seconds 00:33:06.093 00:33:06.094 Latency(us) 00:33:06.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.094 =================================================================================================================== 00:33:06.094 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:06.094 17:12:26 keyring_file -- common/autotest_common.sh@974 -- # wait 1666163 00:33:06.355 17:12:26 keyring_file -- keyring/file.sh@21 -- # killprocess 1664357 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1664357 ']' 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1664357 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1664357 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1664357' 00:33:06.355 killing process with pid 1664357 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@969 -- # kill 1664357 00:33:06.355 [2024-07-25 17:12:26.482587] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:06.355 17:12:26 keyring_file -- common/autotest_common.sh@974 -- # wait 1664357 00:33:06.617 00:33:06.617 real 0m11.060s 00:33:06.617 user 0m25.751s 00:33:06.617 sys 0m2.511s 00:33:06.617 17:12:26 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:06.617 17:12:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:06.617 ************************************ 00:33:06.617 END TEST keyring_file 00:33:06.617 ************************************ 00:33:06.617 17:12:26 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:33:06.617 17:12:26 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:06.617 17:12:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:06.617 17:12:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:06.617 17:12:26 -- common/autotest_common.sh@10 -- # set +x 00:33:06.617 ************************************ 00:33:06.617 START TEST keyring_linux 00:33:06.617 ************************************ 00:33:06.617 17:12:26 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:06.617 * Looking for test 
storage... 00:33:06.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:06.617 17:12:26 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:06.617 17:12:26 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.617 17:12:26 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.879 17:12:26 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.879 17:12:26 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.879 17:12:26 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.879 17:12:26 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.879 17:12:26 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.879 17:12:26 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.879 17:12:26 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.879 17:12:26 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.879 17:12:26 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.879 17:12:26 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.879 17:12:26 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.879 17:12:26 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.879 17:12:26 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.879 17:12:26 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:06.880 17:12:26 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:06.880 17:12:26 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:06.880 /tmp/:spdk-test:key0 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:06.880 17:12:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:06.880 17:12:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:06.880 /tmp/:spdk-test:key1 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1666728 00:33:06.880 17:12:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1666728 00:33:06.880 17:12:26 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1666728 ']' 00:33:06.880 17:12:26 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.880 17:12:26 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:06.880 17:12:26 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.880 17:12:26 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:06.880 17:12:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:06.880 [2024-07-25 17:12:27.044832] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
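prep_key wraps each raw hex key in the NVMe/TCP PSK interchange format before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600; the encoding itself is done by the inline "python -" call visible above. The payload appears to be the ASCII key followed by its CRC-32 in little-endian order, Base64-encoded and wrapped as NVMeTLSkey-1:<digest>:...:, so one way to reproduce it with plain shell tools (a sketch under that assumption, not the script's own implementation) is to lift the CRC out of a gzip trailer:

  key=00112233445566778899aabbccddeeff
  # a gzip stream ends with CRC-32 (little-endian) followed by the input size,
  # so the first 4 of the last 8 trailer bytes are the CRC to append
  { printf '%s' "$key"
    printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64
  # if the assumption holds, this prints the Base64 payload seen in the keyctl
  # entries below, and NVMeTLSkey-1:00:<payload>: completes the interchange string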
00:33:06.880 [2024-07-25 17:12:27.044927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666728 ] 00:33:06.880 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.880 [2024-07-25 17:12:27.109339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.141 [2024-07-25 17:12:27.183367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:07.714 17:12:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:07.714 [2024-07-25 17:12:27.820330] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.714 null0 00:33:07.714 [2024-07-25 17:12:27.852370] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:07.714 [2024-07-25 17:12:27.852778] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.714 17:12:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:07.714 973773808 00:33:07.714 17:12:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:07.714 884809846 00:33:07.714 17:12:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1666748 00:33:07.714 17:12:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1666748 /var/tmp/bperf.sock 00:33:07.714 17:12:27 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1666748 ']' 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:07.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.714 17:12:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:07.714 [2024-07-25 17:12:27.926848] Starting SPDK v24.09-pre git sha1 7b27bb4a4 / DPDK 24.03.0 initialization... 
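Unlike the keyring_file case, the PSKs here never reach bdevperf as file paths: linux.sh@66 and @67 load the interchange-format strings into the kernel session keyring with keyctl, and the serial numbers printed above (973773808 and 884809846) are what the test compares against later. The following entries enable SPDK's Linux-keyring integration inside bdevperf (keyring_linux_set_options --enable before framework_start_init, which is why the app was started with --wait-for-rpc) and then attach the controller with --psk :spdk-test:key0. The keyctl side of that flow, in isolation and with the key string copied verbatim from the log:

  # store the interchange-format PSK as a "user" key in the session keyring;
  # keyctl prints the new key's serial number (973773808 in this run)
  keyctl add user :spdk-test:key0 \
          "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

  # resolve the serial back from the name and dump the stored payload
  keyctl search @s user :spdk-test:key0
  keyctl print 973773808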
00:33:07.714 [2024-07-25 17:12:27.926896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666748 ] 00:33:07.714 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.976 [2024-07-25 17:12:28.001041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.976 [2024-07-25 17:12:28.054649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.549 17:12:28 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.549 17:12:28 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:08.549 17:12:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:08.549 17:12:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:08.809 17:12:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:08.809 17:12:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:08.810 17:12:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:08.810 17:12:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:09.071 [2024-07-25 17:12:29.189349] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:09.071 nvme0n1 00:33:09.071 17:12:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:09.071 17:12:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:09.071 17:12:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:09.071 17:12:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:09.071 17:12:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:09.071 17:12:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.332 17:12:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:09.332 17:12:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:09.332 17:12:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:09.332 17:12:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:09.332 17:12:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:09.332 17:12:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:09.332 17:12:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.332 17:12:29 keyring_linux -- keyring/linux.sh@25 -- # sn=973773808 00:33:09.593 17:12:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:09.593 17:12:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:09.593 17:12:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 973773808 == \9\7\3\7\7\3\8\0\8 ]] 00:33:09.593 17:12:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 973773808 00:33:09.593 17:12:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:09.593 17:12:29 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.593 Running I/O for 1 seconds... 00:33:10.537 00:33:10.537 Latency(us) 00:33:10.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.537 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:10.537 nvme0n1 : 1.02 6440.13 25.16 0.00 0.00 19739.02 4915.20 25449.81 00:33:10.537 =================================================================================================================== 00:33:10.537 Total : 6440.13 25.16 0.00 0.00 19739.02 4915.20 25449.81 00:33:10.537 0 00:33:10.537 17:12:30 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:10.537 17:12:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:10.798 17:12:30 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:10.798 17:12:30 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:10.798 17:12:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:10.798 17:12:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:10.798 17:12:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:10.798 17:12:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.798 17:12:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:10.798 17:12:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:10.798 17:12:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:10.798 17:12:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:10.798 17:12:31 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:10.798 17:12:31 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:10.798 17:12:31 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:10.798 17:12:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.798 17:12:31 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:10.798 17:12:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.798 17:12:31 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:10.798 17:12:31 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:11.059 [2024-07-25 17:12:31.201553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:11.059 [2024-07-25 17:12:31.201576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d20f0 (107): Transport endpoint is not connected 00:33:11.059 [2024-07-25 17:12:31.202572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d20f0 (9): Bad file descriptor 00:33:11.059 [2024-07-25 17:12:31.203573] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:11.059 [2024-07-25 17:12:31.203581] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:11.059 [2024-07-25 17:12:31.203587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:11.059 request: 00:33:11.059 { 00:33:11.059 "name": "nvme0", 00:33:11.059 "trtype": "tcp", 00:33:11.059 "traddr": "127.0.0.1", 00:33:11.059 "adrfam": "ipv4", 00:33:11.059 "trsvcid": "4420", 00:33:11.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.059 "prchk_reftag": false, 00:33:11.059 "prchk_guard": false, 00:33:11.059 "hdgst": false, 00:33:11.059 "ddgst": false, 00:33:11.059 "psk": ":spdk-test:key1", 00:33:11.059 "method": "bdev_nvme_attach_controller", 00:33:11.059 "req_id": 1 00:33:11.059 } 00:33:11.059 Got JSON-RPC error response 00:33:11.059 response: 00:33:11.059 { 00:33:11.059 "code": -5, 00:33:11.059 "message": "Input/output error" 00:33:11.059 } 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@33 -- # sn=973773808 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 973773808 00:33:11.059 1 links removed 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@33 -- # sn=884809846 00:33:11.059 
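The failed attach above is the negative half of the test: the controller is re-attached with --psk :spdk-test:key1, which the target side evidently was not configured to accept, so the connection drops (errno 107, then a bad file descriptor) and the RPC surfaces code -5, "Input/output error", exactly what the NOT wrapper expects. Cleanup then removes both keys from the session keyring by serial number, which is what produces the "1 links removed" lines here and just below. In isolation that teardown amounts to the following sketch, mirroring the cleanup steps in the log:

  # look up each key's serial in the session keyring and unlink it everywhere
  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name")
      keyctl unlink "$sn"        # prints "1 links removed"
  done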
17:12:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 884809846 00:33:11.059 1 links removed 00:33:11.059 17:12:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1666748 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1666748 ']' 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1666748 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666748 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666748' 00:33:11.059 killing process with pid 1666748 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 1666748 00:33:11.059 Received shutdown signal, test time was about 1.000000 seconds 00:33:11.059 00:33:11.059 Latency(us) 00:33:11.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.059 =================================================================================================================== 00:33:11.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.059 17:12:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 1666748 00:33:11.320 17:12:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1666728 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1666728 ']' 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1666728 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666728 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666728' 00:33:11.320 killing process with pid 1666728 00:33:11.320 17:12:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 1666728 00:33:11.321 17:12:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 1666728 00:33:11.582 00:33:11.582 real 0m4.886s 00:33:11.582 user 0m8.511s 00:33:11.582 sys 0m1.153s 00:33:11.582 17:12:31 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:11.582 17:12:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:11.582 ************************************ 00:33:11.582 END TEST keyring_linux 00:33:11.582 ************************************ 00:33:11.582 17:12:31 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- 
spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:33:11.582 17:12:31 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:11.582 17:12:31 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:11.582 17:12:31 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:11.582 17:12:31 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:33:11.582 17:12:31 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:33:11.582 17:12:31 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:33:11.582 17:12:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.582 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:33:11.582 17:12:31 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:33:11.582 17:12:31 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:11.582 17:12:31 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:11.582 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:33:19.763 INFO: APP EXITING 00:33:19.763 INFO: killing all VMs 00:33:19.763 INFO: killing vhost app 00:33:19.763 WARN: no vhost pid file found 00:33:19.763 INFO: EXIT DONE 00:33:22.313 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:22.313 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:22.313 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:22.313 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:22.313 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:22.575 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:22.575 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:22.835 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:26.141 Cleaning 00:33:26.141 Removing: /var/run/dpdk/spdk0/config 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:26.141 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:26.403 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:26.403 Removing: /var/run/dpdk/spdk1/config 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:26.403 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:26.403 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:26.403 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:26.403 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:26.403 Removing: /var/run/dpdk/spdk2/config 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:26.403 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:26.403 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:26.403 Removing: /var/run/dpdk/spdk3/config 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:26.403 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:26.403 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:26.403 Removing: /var/run/dpdk/spdk4/config 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:26.403 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:26.403 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:26.403 Removing: /dev/shm/bdev_svc_trace.1 00:33:26.403 Removing: /dev/shm/nvmf_trace.0 00:33:26.403 Removing: /dev/shm/spdk_tgt_trace.pid1216101 00:33:26.403 Removing: /var/run/dpdk/spdk0 00:33:26.403 Removing: /var/run/dpdk/spdk1 00:33:26.403 Removing: /var/run/dpdk/spdk2 00:33:26.403 Removing: /var/run/dpdk/spdk3 00:33:26.403 Removing: /var/run/dpdk/spdk4 00:33:26.403 Removing: /var/run/dpdk/spdk_pid1214492 00:33:26.403 Removing: /var/run/dpdk/spdk_pid1216101 00:33:26.403 Removing: /var/run/dpdk/spdk_pid1216679 00:33:26.403 Removing: /var/run/dpdk/spdk_pid1217762 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1218060 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1219170 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1219471 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1219701 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1220726 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1221449 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1221733 00:33:26.664 Removing: 
/var/run/dpdk/spdk_pid1221983 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1222358 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1222755 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1223106 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1223358 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1223601 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1224905 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1228166 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1228529 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1228895 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1229146 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1229601 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1229690 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1230154 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1230317 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1230683 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1230709 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1231051 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1231121 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1231700 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1231871 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1232254 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1236736 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1242022 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1254179 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1255236 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1260461 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1260808 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1265857 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1272684 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1275816 00:33:26.664 Removing: /var/run/dpdk/spdk_pid1288181 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1298871 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1300890 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1301926 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1323108 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1327841 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1380863 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1387228 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1394180 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1401283 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1401285 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1402293 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1403295 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1404297 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1404965 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1404993 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1405306 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1405443 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1405564 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1406627 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1407628 00:33:26.665 Removing: /var/run/dpdk/spdk_pid1408655 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1409329 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1409332 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1409663 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1410957 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1412174 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1422719 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1454229 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1460057 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1461925 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1464120 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1464458 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1464601 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1464813 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1465425 00:33:26.926 Removing: 
/var/run/dpdk/spdk_pid1467558 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1468634 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1469031 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1471717 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1472431 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1473156 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1478196 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1490131 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1494958 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1502332 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1504151 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1506074 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1511185 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1516114 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1524946 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1524961 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1529996 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1530334 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1530500 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1531004 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1531021 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1536589 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1537214 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1542519 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1545727 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1552260 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1558719 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1569230 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1577710 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1577757 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1599902 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1600604 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1601319 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1601943 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1602899 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1603677 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1604363 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1605051 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1610151 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1610468 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1618183 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1618401 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1621168 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1628324 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1628331 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1634045 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1636385 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1638655 00:33:26.926 Removing: /var/run/dpdk/spdk_pid1640082 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1642511 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1643823 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1653757 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1654425 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1655062 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1657729 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1658376 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1659044 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1664357 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1664500 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1666163 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1666728 00:33:27.188 Removing: /var/run/dpdk/spdk_pid1666748 00:33:27.188 Clean 00:33:27.188 17:12:47 -- common/autotest_common.sh@1451 -- # return 0 00:33:27.188 17:12:47 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:33:27.188 17:12:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:27.188 17:12:47 -- 
common/autotest_common.sh@10 -- # set +x 00:33:27.188 17:12:47 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:33:27.188 17:12:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:27.188 17:12:47 -- common/autotest_common.sh@10 -- # set +x 00:33:27.188 17:12:47 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:27.188 17:12:47 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:27.188 17:12:47 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:27.188 17:12:47 -- spdk/autotest.sh@395 -- # hash lcov 00:33:27.188 17:12:47 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:27.188 17:12:47 -- spdk/autotest.sh@397 -- # hostname 00:33:27.188 17:12:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:27.449 geninfo: WARNING: invalid characters removed from testname! 00:33:54.034 17:13:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:54.034 17:13:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:55.950 17:13:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:57.336 17:13:17 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:58.720 17:13:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:00.694 17:13:20 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:02.081 17:13:22 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:02.081 17:13:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.081 17:13:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:02.081 17:13:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.081 17:13:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.081 17:13:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.081 17:13:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.081 17:13:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.081 17:13:22 -- paths/export.sh@5 -- $ export PATH 00:34:02.081 17:13:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.081 17:13:22 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:02.081 17:13:22 -- common/autobuild_common.sh@447 -- $ date +%s 00:34:02.081 17:13:22 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721920402.XXXXXX 00:34:02.081 17:13:22 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721920402.8b4bJS 00:34:02.081 17:13:22 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:34:02.081 17:13:22 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:34:02.081 17:13:22 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:02.081 17:13:22 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:02.081 17:13:22 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:02.081 17:13:22 -- common/autobuild_common.sh@463 -- $ get_config_params 00:34:02.081 17:13:22 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:34:02.081 17:13:22 -- common/autotest_common.sh@10 -- $ set +x 00:34:02.081 17:13:22 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:02.081 17:13:22 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:34:02.081 17:13:22 -- pm/common@17 -- $ local monitor 00:34:02.081 17:13:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:02.081 17:13:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:02.081 17:13:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:02.081 17:13:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:02.081 17:13:22 -- pm/common@21 -- $ date +%s 00:34:02.081 17:13:22 -- pm/common@25 -- $ sleep 1 00:34:02.081 17:13:22 -- pm/common@21 -- $ date +%s 00:34:02.081 17:13:22 -- pm/common@21 -- $ date +%s 00:34:02.081 17:13:22 -- pm/common@21 -- $ date +%s 00:34:02.081 17:13:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721920402 00:34:02.081 17:13:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721920402 00:34:02.081 17:13:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721920402 00:34:02.081 17:13:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721920402 00:34:02.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721920402_collect-vmstat.pm.log 00:34:02.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721920402_collect-cpu-load.pm.log 00:34:02.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721920402_collect-cpu-temp.pm.log 00:34:02.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721920402_collect-bmc-pm.bmc.pm.log 00:34:03.025 17:13:23 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:03.025 17:13:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:03.025 17:13:23 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:03.025 17:13:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:03.025 17:13:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:03.025 17:13:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:03.025 17:13:23 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:03.025 17:13:23 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:03.025 17:13:23 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:03.300 17:13:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:03.300 17:13:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:03.300 17:13:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:03.300 17:13:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:03.300 17:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:03.301 17:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:03.301 17:13:23 -- pm/common@44 -- $ pid=1679120 00:34:03.301 17:13:23 -- pm/common@50 -- $ kill -TERM 1679120 00:34:03.301 17:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:03.301 17:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:03.301 17:13:23 -- pm/common@44 -- $ pid=1679121 00:34:03.301 17:13:23 -- pm/common@50 -- $ kill -TERM 1679121 00:34:03.301 17:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:03.301 17:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:03.301 17:13:23 -- pm/common@44 -- $ pid=1679123 00:34:03.301 17:13:23 -- pm/common@50 -- $ kill -TERM 1679123 00:34:03.301 17:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:03.301 17:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:03.301 17:13:23 -- pm/common@44 -- $ pid=1679146 00:34:03.301 17:13:23 -- pm/common@50 -- $ sudo -E kill -TERM 1679146 00:34:03.301 + [[ -n 1093069 ]] 00:34:03.301 + sudo kill 1093069 00:34:03.318 [Pipeline] } 00:34:03.328 [Pipeline] // stage 00:34:03.332 [Pipeline] } 00:34:03.342 [Pipeline] // timeout 00:34:03.346 [Pipeline] } 00:34:03.361 [Pipeline] // catchError 00:34:03.364 [Pipeline] } 00:34:03.378 [Pipeline] // wrap 00:34:03.382 [Pipeline] } 00:34:03.394 [Pipeline] // catchError 00:34:03.403 [Pipeline] stage 00:34:03.405 [Pipeline] { (Epilogue) 00:34:03.418 [Pipeline] catchError 00:34:03.419 [Pipeline] { 00:34:03.433 [Pipeline] echo 00:34:03.435 Cleanup processes 00:34:03.440 [Pipeline] sh 00:34:03.729 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:03.729 1679229 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:03.729 1679668 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:03.745 [Pipeline] sh 00:34:04.035 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:04.035 ++ grep -v 'sudo pgrep' 00:34:04.035 ++ awk '{print $1}' 00:34:04.035 + sudo kill -9 1679229 00:34:04.049 [Pipeline] sh 00:34:04.340 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:16.591 [Pipeline] sh 00:34:16.879 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:16.879 Artifacts sizes are good 00:34:16.896 [Pipeline] archiveArtifacts 00:34:16.905 Archiving artifacts 00:34:17.102 [Pipeline] sh 00:34:17.391 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 
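Note: the stop_monitor_resources / signal_monitor_resources steps above shut the resource monitors down by checking each monitor's pid file under output/power and sending SIGTERM. A minimal sketch of that pid-file teardown pattern, assuming the four monitor names and the power directory visible in this run (the exact guards and sudo handling in pm/common may differ):

    #!/usr/bin/env bash
    # Sketch of the pid-file based monitor teardown seen above (paths illustrative).
    power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$power_dir/$monitor.pid"
        [[ -e $pidfile ]] || continue            # monitor was never started; nothing to stop
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true    # the real job uses 'sudo -E kill -TERM' for collect-bmc-pm
    done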
00:34:17.409 [Pipeline] cleanWs
00:34:17.421 [WS-CLEANUP] Deleting project workspace...
00:34:17.421 [WS-CLEANUP] Deferred wipeout is used...
00:34:17.429 [WS-CLEANUP] done
00:34:17.431 [Pipeline] }
00:34:17.451 [Pipeline] // catchError
00:34:17.463 [Pipeline] sh
00:34:17.750 + logger -p user.info -t JENKINS-CI
00:34:17.761 [Pipeline] }
00:34:17.777 [Pipeline] // stage
00:34:17.782 [Pipeline] }
00:34:17.799 [Pipeline] // node
00:34:17.805 [Pipeline] End of Pipeline
00:34:17.839 Finished: SUCCESS
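Note: the coverage post-processing earlier in this run merged the base and test lcov captures and then stripped out-of-tree and helper paths before cov_total.info was published. A minimal sketch of that merge-and-filter sequence, with the --rc/--no-external options omitted and the exclude list limited to the patterns visible in the log:

    #!/usr/bin/env bash
    # Sketch of the lcov merge-and-filter sequence from this run (options abbreviated).
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done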